00:00:00.001 Started by upstream project "autotest-per-patch" build number 132396 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.077 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.078 The recommended git tool is: git 00:00:00.078 using credential 00000000-0000-0000-0000-000000000002 00:00:00.080 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.150 Fetching changes from the remote Git repository 00:00:00.152 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.241 Using shallow fetch with depth 1 00:00:00.241 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.241 > git --version # timeout=10 00:00:00.326 > git --version # 'git version 2.39.2' 00:00:00.326 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.389 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.389 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:10.098 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:10.111 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:10.124 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:10.124 > git config core.sparsecheckout # timeout=10 00:00:10.137 > git read-tree -mu HEAD # timeout=10 00:00:10.155 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:10.182 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:10.183 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:10.318 [Pipeline] Start of Pipeline 00:00:10.332 [Pipeline] library 00:00:10.334 Loading library shm_lib@master 00:00:10.334 Library shm_lib@master is cached. Copying from home. 00:00:10.354 [Pipeline] node 00:00:25.356 Still waiting to schedule task 00:00:25.356 Waiting for next available executor on ‘vagrant-vm-host’ 00:25:31.302 Running on VM-host-WFP7 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:25:31.305 [Pipeline] { 00:25:31.319 [Pipeline] catchError 00:25:31.322 [Pipeline] { 00:25:31.338 [Pipeline] wrap 00:25:31.351 [Pipeline] { 00:25:31.360 [Pipeline] stage 00:25:31.362 [Pipeline] { (Prologue) 00:25:31.383 [Pipeline] echo 00:25:31.385 Node: VM-host-WFP7 00:25:31.394 [Pipeline] cleanWs 00:25:31.405 [WS-CLEANUP] Deleting project workspace... 00:25:31.405 [WS-CLEANUP] Deferred wipeout is used... 
00:25:31.411 [WS-CLEANUP] done 00:25:31.610 [Pipeline] setCustomBuildProperty 00:25:31.704 [Pipeline] httpRequest 00:25:32.033 [Pipeline] echo 00:25:32.037 Sorcerer 10.211.164.20 is alive 00:25:32.051 [Pipeline] retry 00:25:32.054 [Pipeline] { 00:25:32.062 [Pipeline] httpRequest 00:25:32.066 HttpMethod: GET 00:25:32.066 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:25:32.066 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:25:32.068 Response Code: HTTP/1.1 200 OK 00:25:32.068 Success: Status code 200 is in the accepted range: 200,404 00:25:32.069 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:25:32.359 [Pipeline] } 00:25:32.374 [Pipeline] // retry 00:25:32.382 [Pipeline] sh 00:25:32.660 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:25:32.676 [Pipeline] httpRequest 00:25:33.010 [Pipeline] echo 00:25:33.012 Sorcerer 10.211.164.20 is alive 00:25:33.022 [Pipeline] retry 00:25:33.023 [Pipeline] { 00:25:33.039 [Pipeline] httpRequest 00:25:33.045 HttpMethod: GET 00:25:33.046 URL: http://10.211.164.20/packages/spdk_f9d18d578e28928a879defa22dc91bc65c5666a7.tar.gz 00:25:33.047 Sending request to url: http://10.211.164.20/packages/spdk_f9d18d578e28928a879defa22dc91bc65c5666a7.tar.gz 00:25:33.047 Response Code: HTTP/1.1 200 OK 00:25:33.048 Success: Status code 200 is in the accepted range: 200,404 00:25:33.048 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk_f9d18d578e28928a879defa22dc91bc65c5666a7.tar.gz 00:25:35.654 [Pipeline] } 00:25:35.675 [Pipeline] // retry 00:25:35.685 [Pipeline] sh 00:25:36.012 + tar --no-same-owner -xf spdk_f9d18d578e28928a879defa22dc91bc65c5666a7.tar.gz 00:25:39.344 [Pipeline] sh 00:25:39.646 + git -C spdk log --oneline -n5 00:25:39.646 f9d18d578 nvmf: Set bdev_ext_io_opts::dif_check_flags_exclude_mask for read/write 00:25:39.646 a361eb5e2 nvme_spec: Add SPDK_NVME_IO_FLAGS_PRCHK_MASK 00:25:39.646 4ab755590 bdev: Insert or overwrite metadata using bounce/accel buffer if NVMe PRACT is set 00:25:39.646 f40c2e7bb dif: Add spdk_dif_pi_format_get_pi_size() to use for NVMe PRACT 00:25:39.646 325a79ea3 bdev/malloc: Support accel sequence when DIF is enabled 00:25:39.669 [Pipeline] writeFile 00:25:39.687 [Pipeline] sh 00:25:39.972 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:25:39.985 [Pipeline] sh 00:25:40.271 + cat autorun-spdk.conf 00:25:40.271 SPDK_RUN_FUNCTIONAL_TEST=1 00:25:40.271 SPDK_TEST_NVMF=1 00:25:40.271 SPDK_TEST_NVMF_TRANSPORT=tcp 00:25:40.271 SPDK_TEST_URING=1 00:25:40.271 SPDK_TEST_USDT=1 00:25:40.271 SPDK_RUN_UBSAN=1 00:25:40.271 NET_TYPE=virt 00:25:40.271 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:25:40.277 RUN_NIGHTLY=0 00:25:40.279 [Pipeline] } 00:25:40.288 [Pipeline] // stage 00:25:40.298 [Pipeline] stage 00:25:40.300 [Pipeline] { (Run VM) 00:25:40.309 [Pipeline] sh 00:25:40.586 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:25:40.586 + echo 'Start stage prepare_nvme.sh' 00:25:40.586 Start stage prepare_nvme.sh 00:25:40.586 + [[ -n 3 ]] 00:25:40.586 + disk_prefix=ex3 00:25:40.586 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 ]] 00:25:40.586 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf ]] 00:25:40.586 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf 00:25:40.586 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:25:40.586 ++ 
SPDK_TEST_NVMF=1 00:25:40.586 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:25:40.586 ++ SPDK_TEST_URING=1 00:25:40.586 ++ SPDK_TEST_USDT=1 00:25:40.586 ++ SPDK_RUN_UBSAN=1 00:25:40.586 ++ NET_TYPE=virt 00:25:40.586 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:25:40.586 ++ RUN_NIGHTLY=0 00:25:40.586 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:25:40.586 + nvme_files=() 00:25:40.586 + declare -A nvme_files 00:25:40.586 + backend_dir=/var/lib/libvirt/images/backends 00:25:40.586 + nvme_files['nvme.img']=5G 00:25:40.586 + nvme_files['nvme-cmb.img']=5G 00:25:40.586 + nvme_files['nvme-multi0.img']=4G 00:25:40.586 + nvme_files['nvme-multi1.img']=4G 00:25:40.586 + nvme_files['nvme-multi2.img']=4G 00:25:40.586 + nvme_files['nvme-openstack.img']=8G 00:25:40.586 + nvme_files['nvme-zns.img']=5G 00:25:40.586 + (( SPDK_TEST_NVME_PMR == 1 )) 00:25:40.587 + (( SPDK_TEST_FTL == 1 )) 00:25:40.587 + (( SPDK_TEST_NVME_FDP == 1 )) 00:25:40.587 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:25:40.587 + for nvme in "${!nvme_files[@]}" 00:25:40.587 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:25:40.587 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:25:40.587 + for nvme in "${!nvme_files[@]}" 00:25:40.587 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:25:40.587 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:25:40.587 + for nvme in "${!nvme_files[@]}" 00:25:40.587 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:25:40.587 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:25:40.587 + for nvme in "${!nvme_files[@]}" 00:25:40.587 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:25:40.587 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:25:40.587 + for nvme in "${!nvme_files[@]}" 00:25:40.587 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:25:40.587 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:25:40.587 + for nvme in "${!nvme_files[@]}" 00:25:40.587 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:25:40.587 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:25:40.587 + for nvme in "${!nvme_files[@]}" 00:25:40.587 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:25:40.846 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:25:40.846 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:25:40.846 + echo 'End stage prepare_nvme.sh' 00:25:40.846 End stage prepare_nvme.sh 00:25:40.858 [Pipeline] sh 00:25:41.140 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:25:41.140 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex3-nvme.img -b 
/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -H -a -v -f fedora39 00:25:41.140 00:25:41.140 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/scripts/vagrant 00:25:41.140 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk 00:25:41.140 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:25:41.140 HELP=0 00:25:41.140 DRY_RUN=0 00:25:41.140 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img, 00:25:41.140 NVME_DISKS_TYPE=nvme,nvme, 00:25:41.140 NVME_AUTO_CREATE=0 00:25:41.140 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img, 00:25:41.140 NVME_CMB=,, 00:25:41.140 NVME_PMR=,, 00:25:41.140 NVME_ZNS=,, 00:25:41.140 NVME_MS=,, 00:25:41.140 NVME_FDP=,, 00:25:41.140 SPDK_VAGRANT_DISTRO=fedora39 00:25:41.140 SPDK_VAGRANT_VMCPU=10 00:25:41.140 SPDK_VAGRANT_VMRAM=12288 00:25:41.140 SPDK_VAGRANT_PROVIDER=libvirt 00:25:41.140 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:25:41.140 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:25:41.140 SPDK_OPENSTACK_NETWORK=0 00:25:41.140 VAGRANT_PACKAGE_BOX=0 00:25:41.140 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:25:41.140 FORCE_DISTRO=true 00:25:41.140 VAGRANT_BOX_VERSION= 00:25:41.140 EXTRA_VAGRANTFILES= 00:25:41.140 NIC_MODEL=virtio 00:25:41.140 00:25:41.140 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt' 00:25:41.140 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:25:43.678 Bringing machine 'default' up with 'libvirt' provider... 00:25:44.244 ==> default: Creating image (snapshot of base box volume). 00:25:44.503 ==> default: Creating domain with the following settings... 
00:25:44.503 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732110401_9715103bb5cde310f73b 00:25:44.503 ==> default: -- Domain type: kvm 00:25:44.503 ==> default: -- Cpus: 10 00:25:44.503 ==> default: -- Feature: acpi 00:25:44.503 ==> default: -- Feature: apic 00:25:44.503 ==> default: -- Feature: pae 00:25:44.503 ==> default: -- Memory: 12288M 00:25:44.503 ==> default: -- Memory Backing: hugepages: 00:25:44.503 ==> default: -- Management MAC: 00:25:44.503 ==> default: -- Loader: 00:25:44.503 ==> default: -- Nvram: 00:25:44.503 ==> default: -- Base box: spdk/fedora39 00:25:44.503 ==> default: -- Storage pool: default 00:25:44.503 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732110401_9715103bb5cde310f73b.img (20G) 00:25:44.503 ==> default: -- Volume Cache: default 00:25:44.503 ==> default: -- Kernel: 00:25:44.503 ==> default: -- Initrd: 00:25:44.503 ==> default: -- Graphics Type: vnc 00:25:44.503 ==> default: -- Graphics Port: -1 00:25:44.503 ==> default: -- Graphics IP: 127.0.0.1 00:25:44.503 ==> default: -- Graphics Password: Not defined 00:25:44.503 ==> default: -- Video Type: cirrus 00:25:44.503 ==> default: -- Video VRAM: 9216 00:25:44.503 ==> default: -- Sound Type: 00:25:44.503 ==> default: -- Keymap: en-us 00:25:44.503 ==> default: -- TPM Path: 00:25:44.503 ==> default: -- INPUT: type=mouse, bus=ps2 00:25:44.503 ==> default: -- Command line args: 00:25:44.503 ==> default: -> value=-device, 00:25:44.503 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:25:44.503 ==> default: -> value=-drive, 00:25:44.503 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0, 00:25:44.503 ==> default: -> value=-device, 00:25:44.503 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:25:44.503 ==> default: -> value=-device, 00:25:44.503 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:25:44.503 ==> default: -> value=-drive, 00:25:44.503 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:25:44.503 ==> default: -> value=-device, 00:25:44.503 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:25:44.503 ==> default: -> value=-drive, 00:25:44.503 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:25:44.503 ==> default: -> value=-device, 00:25:44.503 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:25:44.503 ==> default: -> value=-drive, 00:25:44.503 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:25:44.503 ==> default: -> value=-device, 00:25:44.503 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:25:44.503 ==> default: Creating shared folders metadata... 00:25:44.503 ==> default: Starting domain. 00:25:46.404 ==> default: Waiting for domain to get an IP address... 00:26:01.283 ==> default: Waiting for SSH to become available... 00:26:02.659 ==> default: Configuring and enabling network interfaces... 
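(Aside, not part of the captured console output: the "Command line args" block above shows how the ex3-* backend images are exposed to the guest as two emulated NVMe controllers, one single-namespace and one with three namespaces. Assembled as a plain QEMU invocation it would look roughly like the sketch below; the device/drive arguments are copied from the log, while the binary path is the SPDK_QEMU_EMULATOR value shown earlier and the elided machine/memory options are assumptions.)

    /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 ... \
      -device nvme,id=nvme-0,serial=12340,addr=0x10 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0 \
      -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
      -device nvme,id=nvme-1,serial=12341,addr=0x11 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0 \
      -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1 \
      -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2 \
      -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096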
00:26:09.230 default: SSH address: 192.168.121.195:22 00:26:09.230 default: SSH username: vagrant 00:26:09.230 default: SSH auth method: private key 00:26:11.136 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:26:19.265 ==> default: Mounting SSHFS shared folder... 00:26:21.795 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:26:21.795 ==> default: Checking Mount.. 00:26:23.176 ==> default: Folder Successfully Mounted! 00:26:23.176 ==> default: Running provisioner: file... 00:26:24.558 default: ~/.gitconfig => .gitconfig 00:26:24.818 00:26:24.818 SUCCESS! 00:26:24.818 00:26:24.818 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use. 00:26:24.818 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:26:24.818 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm. 00:26:24.818 00:26:24.828 [Pipeline] } 00:26:24.845 [Pipeline] // stage 00:26:24.855 [Pipeline] dir 00:26:24.856 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt 00:26:24.858 [Pipeline] { 00:26:24.872 [Pipeline] catchError 00:26:24.874 [Pipeline] { 00:26:24.890 [Pipeline] sh 00:26:25.177 + vagrant ssh-config --host vagrant 00:26:25.177 + sed -ne /^Host/,$p 00:26:25.177 + tee ssh_conf 00:26:28.468 Host vagrant 00:26:28.468 HostName 192.168.121.195 00:26:28.468 User vagrant 00:26:28.468 Port 22 00:26:28.468 UserKnownHostsFile /dev/null 00:26:28.468 StrictHostKeyChecking no 00:26:28.468 PasswordAuthentication no 00:26:28.468 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:26:28.468 IdentitiesOnly yes 00:26:28.468 LogLevel FATAL 00:26:28.468 ForwardAgent yes 00:26:28.468 ForwardX11 yes 00:26:28.468 00:26:28.483 [Pipeline] withEnv 00:26:28.486 [Pipeline] { 00:26:28.499 [Pipeline] sh 00:26:28.780 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:26:28.780 source /etc/os-release 00:26:28.780 [[ -e /image.version ]] && img=$(< /image.version) 00:26:28.780 # Minimal, systemd-like check. 00:26:28.780 if [[ -e /.dockerenv ]]; then 00:26:28.780 # Clear garbage from the node's name: 00:26:28.780 # agt-er_autotest_547-896 -> autotest_547-896 00:26:28.780 # $HOSTNAME is the actual container id 00:26:28.780 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:26:28.780 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:26:28.780 # We can assume this is a mount from a host where container is running, 00:26:28.780 # so fetch its hostname to easily identify the target swarm worker. 
00:26:28.780 container="$(< /etc/hostname) ($agent)" 00:26:28.780 else 00:26:28.780 # Fallback 00:26:28.780 container=$agent 00:26:28.780 fi 00:26:28.780 fi 00:26:28.780 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:26:28.780 00:26:29.054 [Pipeline] } 00:26:29.072 [Pipeline] // withEnv 00:26:29.081 [Pipeline] setCustomBuildProperty 00:26:29.096 [Pipeline] stage 00:26:29.098 [Pipeline] { (Tests) 00:26:29.115 [Pipeline] sh 00:26:29.397 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:26:29.670 [Pipeline] sh 00:26:29.953 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:26:30.229 [Pipeline] timeout 00:26:30.229 Timeout set to expire in 1 hr 0 min 00:26:30.232 [Pipeline] { 00:26:30.246 [Pipeline] sh 00:26:30.529 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:26:31.098 HEAD is now at f9d18d578 nvmf: Set bdev_ext_io_opts::dif_check_flags_exclude_mask for read/write 00:26:31.110 [Pipeline] sh 00:26:31.392 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:26:31.665 [Pipeline] sh 00:26:31.949 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:26:32.221 [Pipeline] sh 00:26:32.497 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:26:32.756 ++ readlink -f spdk_repo 00:26:32.756 + DIR_ROOT=/home/vagrant/spdk_repo 00:26:32.756 + [[ -n /home/vagrant/spdk_repo ]] 00:26:32.756 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:26:32.756 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:26:32.756 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:26:32.757 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:26:32.757 + [[ -d /home/vagrant/spdk_repo/output ]] 00:26:32.757 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:26:32.757 + cd /home/vagrant/spdk_repo 00:26:32.757 + source /etc/os-release 00:26:32.757 ++ NAME='Fedora Linux' 00:26:32.757 ++ VERSION='39 (Cloud Edition)' 00:26:32.757 ++ ID=fedora 00:26:32.757 ++ VERSION_ID=39 00:26:32.757 ++ VERSION_CODENAME= 00:26:32.757 ++ PLATFORM_ID=platform:f39 00:26:32.757 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:26:32.757 ++ ANSI_COLOR='0;38;2;60;110;180' 00:26:32.757 ++ LOGO=fedora-logo-icon 00:26:32.757 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:26:32.757 ++ HOME_URL=https://fedoraproject.org/ 00:26:32.757 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:26:32.757 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:26:32.757 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:26:32.757 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:26:32.757 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:26:32.757 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:26:32.757 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:26:32.757 ++ SUPPORT_END=2024-11-12 00:26:32.757 ++ VARIANT='Cloud Edition' 00:26:32.757 ++ VARIANT_ID=cloud 00:26:32.757 + uname -a 00:26:32.757 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:26:32.757 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:26:33.325 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:33.325 Hugepages 00:26:33.325 node hugesize free / total 00:26:33.325 node0 1048576kB 0 / 0 00:26:33.325 node0 2048kB 0 / 0 00:26:33.325 00:26:33.325 Type BDF Vendor Device NUMA Driver Device Block devices 00:26:33.325 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:26:33.325 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:26:33.325 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:26:33.325 + rm -f /tmp/spdk-ld-path 00:26:33.325 + source autorun-spdk.conf 00:26:33.325 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:26:33.325 ++ SPDK_TEST_NVMF=1 00:26:33.325 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:26:33.325 ++ SPDK_TEST_URING=1 00:26:33.325 ++ SPDK_TEST_USDT=1 00:26:33.325 ++ SPDK_RUN_UBSAN=1 00:26:33.325 ++ NET_TYPE=virt 00:26:33.325 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:26:33.325 ++ RUN_NIGHTLY=0 00:26:33.325 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:26:33.326 + [[ -n '' ]] 00:26:33.326 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:26:33.584 + for M in /var/spdk/build-*-manifest.txt 00:26:33.584 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:26:33.584 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:26:33.584 + for M in /var/spdk/build-*-manifest.txt 00:26:33.584 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:26:33.584 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:26:33.584 + for M in /var/spdk/build-*-manifest.txt 00:26:33.584 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:26:33.584 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:26:33.584 ++ uname 00:26:33.584 + [[ Linux == \L\i\n\u\x ]] 00:26:33.584 + sudo dmesg -T 00:26:33.584 + sudo dmesg --clear 00:26:33.584 + dmesg_pid=5425 00:26:33.584 + [[ Fedora Linux == FreeBSD ]] 00:26:33.584 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:33.584 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:33.584 + sudo 
dmesg -Tw 00:26:33.584 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:26:33.584 + [[ -x /usr/src/fio-static/fio ]] 00:26:33.584 + export FIO_BIN=/usr/src/fio-static/fio 00:26:33.584 + FIO_BIN=/usr/src/fio-static/fio 00:26:33.584 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:26:33.584 + [[ ! -v VFIO_QEMU_BIN ]] 00:26:33.584 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:26:33.584 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:26:33.584 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:26:33.584 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:26:33.584 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:26:33.584 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:26:33.584 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:26:33.584 13:47:30 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:26:33.584 13:47:30 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:26:33.584 13:47:30 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:26:33.585 13:47:30 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:26:33.585 13:47:30 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:26:33.585 13:47:30 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:26:33.585 13:47:30 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:26:33.585 13:47:30 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:26:33.585 13:47:30 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:26:33.585 13:47:30 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:26:33.585 13:47:30 -- spdk_repo/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:26:33.585 13:47:30 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:26:33.585 13:47:30 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:26:33.845 13:47:30 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:26:33.845 13:47:30 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:33.845 13:47:30 -- scripts/common.sh@15 -- $ shopt -s extglob 00:26:33.845 13:47:30 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:26:33.845 13:47:30 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:33.845 13:47:30 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:33.845 13:47:30 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.845 13:47:30 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.845 13:47:30 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.845 13:47:30 -- paths/export.sh@5 -- $ export PATH 00:26:33.845 13:47:30 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.845 13:47:30 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:26:33.845 13:47:30 -- common/autobuild_common.sh@493 -- $ date +%s 00:26:33.845 13:47:30 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732110450.XXXXXX 00:26:33.845 13:47:30 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732110450.DdquF5 00:26:33.845 13:47:30 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:26:33.845 13:47:30 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:26:33.845 13:47:30 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:26:33.845 13:47:30 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:26:33.845 13:47:30 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:26:33.845 13:47:30 -- common/autobuild_common.sh@509 -- $ get_config_params 00:26:33.845 13:47:30 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:26:33.845 13:47:30 -- common/autotest_common.sh@10 -- $ set +x 00:26:33.845 13:47:31 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:26:33.845 13:47:31 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:26:33.845 13:47:31 -- pm/common@17 -- $ local monitor 00:26:33.845 13:47:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:33.845 13:47:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:33.845 13:47:31 -- pm/common@25 -- $ sleep 1 00:26:33.845 13:47:31 -- pm/common@21 -- $ date +%s 00:26:33.845 13:47:31 -- pm/common@21 -- $ date +%s 00:26:33.845 13:47:31 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732110451 00:26:33.845 13:47:31 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732110451 00:26:33.845 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732110451_collect-cpu-load.pm.log 00:26:33.845 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732110451_collect-vmstat.pm.log 00:26:34.784 13:47:32 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:26:34.784 13:47:32 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:26:34.784 13:47:32 -- spdk/autobuild.sh@12 -- $ umask 022 00:26:34.784 13:47:32 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:26:34.784 13:47:32 -- spdk/autobuild.sh@16 -- $ date -u 00:26:34.784 Wed Nov 20 01:47:32 PM UTC 2024 00:26:34.784 13:47:32 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:26:34.784 v25.01-pre-249-gf9d18d578 00:26:34.784 13:47:32 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:26:34.784 13:47:32 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:26:34.784 13:47:32 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:26:34.784 13:47:32 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:26:34.784 13:47:32 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:26:34.784 13:47:32 -- common/autotest_common.sh@10 -- $ set +x 00:26:34.784 ************************************ 00:26:34.784 START TEST ubsan 00:26:34.784 ************************************ 00:26:34.784 using ubsan 00:26:34.784 13:47:32 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:26:34.784 00:26:34.784 real 0m0.000s 00:26:34.784 user 0m0.000s 00:26:34.784 sys 0m0.000s 00:26:34.784 13:47:32 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:26:34.784 13:47:32 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:26:34.784 ************************************ 00:26:34.784 END TEST ubsan 00:26:34.784 ************************************ 00:26:35.043 13:47:32 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:26:35.043 13:47:32 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:26:35.043 13:47:32 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:26:35.043 13:47:32 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:26:35.043 13:47:32 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:26:35.043 13:47:32 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:26:35.043 13:47:32 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:26:35.043 13:47:32 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:26:35.043 13:47:32 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:26:35.043 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:26:35.043 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:26:35.613 Using 'verbs' RDMA provider 00:26:51.429 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:27:09.523 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:27:09.523 Creating mk/config.mk...done. 00:27:09.523 Creating mk/cc.flags.mk...done. 00:27:09.523 Type 'make' to build. 
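(Aside, not part of the captured console output: the configure line above records the exact feature set this job builds with. A minimal sketch for reproducing the same build outside the CI harness, assuming an SPDK checkout in ./spdk with submodules present; the flags and -j10 are taken from the log, the directory layout is an assumption:)

    cd spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared
    make -j10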
00:27:09.523 13:48:04 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:27:09.523 13:48:04 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:27:09.523 13:48:04 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:27:09.523 13:48:04 -- common/autotest_common.sh@10 -- $ set +x 00:27:09.523 ************************************ 00:27:09.523 START TEST make 00:27:09.523 ************************************ 00:27:09.523 13:48:04 make -- common/autotest_common.sh@1129 -- $ make -j10 00:27:09.523 make[1]: Nothing to be done for 'all'. 00:27:19.546 The Meson build system 00:27:19.546 Version: 1.5.0 00:27:19.546 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:27:19.547 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:27:19.547 Build type: native build 00:27:19.547 Program cat found: YES (/usr/bin/cat) 00:27:19.547 Project name: DPDK 00:27:19.547 Project version: 24.03.0 00:27:19.547 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:27:19.547 C linker for the host machine: cc ld.bfd 2.40-14 00:27:19.547 Host machine cpu family: x86_64 00:27:19.547 Host machine cpu: x86_64 00:27:19.547 Message: ## Building in Developer Mode ## 00:27:19.547 Program pkg-config found: YES (/usr/bin/pkg-config) 00:27:19.547 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:27:19.547 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:27:19.547 Program python3 found: YES (/usr/bin/python3) 00:27:19.547 Program cat found: YES (/usr/bin/cat) 00:27:19.547 Compiler for C supports arguments -march=native: YES 00:27:19.547 Checking for size of "void *" : 8 00:27:19.547 Checking for size of "void *" : 8 (cached) 00:27:19.547 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:27:19.547 Library m found: YES 00:27:19.547 Library numa found: YES 00:27:19.547 Has header "numaif.h" : YES 00:27:19.547 Library fdt found: NO 00:27:19.547 Library execinfo found: NO 00:27:19.547 Has header "execinfo.h" : YES 00:27:19.547 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:27:19.547 Run-time dependency libarchive found: NO (tried pkgconfig) 00:27:19.547 Run-time dependency libbsd found: NO (tried pkgconfig) 00:27:19.547 Run-time dependency jansson found: NO (tried pkgconfig) 00:27:19.547 Run-time dependency openssl found: YES 3.1.1 00:27:19.547 Run-time dependency libpcap found: YES 1.10.4 00:27:19.547 Has header "pcap.h" with dependency libpcap: YES 00:27:19.547 Compiler for C supports arguments -Wcast-qual: YES 00:27:19.547 Compiler for C supports arguments -Wdeprecated: YES 00:27:19.547 Compiler for C supports arguments -Wformat: YES 00:27:19.547 Compiler for C supports arguments -Wformat-nonliteral: NO 00:27:19.547 Compiler for C supports arguments -Wformat-security: NO 00:27:19.547 Compiler for C supports arguments -Wmissing-declarations: YES 00:27:19.547 Compiler for C supports arguments -Wmissing-prototypes: YES 00:27:19.547 Compiler for C supports arguments -Wnested-externs: YES 00:27:19.547 Compiler for C supports arguments -Wold-style-definition: YES 00:27:19.547 Compiler for C supports arguments -Wpointer-arith: YES 00:27:19.547 Compiler for C supports arguments -Wsign-compare: YES 00:27:19.547 Compiler for C supports arguments -Wstrict-prototypes: YES 00:27:19.547 Compiler for C supports arguments -Wundef: YES 00:27:19.547 Compiler for C supports arguments -Wwrite-strings: YES 00:27:19.547 Compiler for C supports 
arguments -Wno-address-of-packed-member: YES 00:27:19.547 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:27:19.547 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:27:19.547 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:27:19.547 Program objdump found: YES (/usr/bin/objdump) 00:27:19.547 Compiler for C supports arguments -mavx512f: YES 00:27:19.547 Checking if "AVX512 checking" compiles: YES 00:27:19.547 Fetching value of define "__SSE4_2__" : 1 00:27:19.547 Fetching value of define "__AES__" : 1 00:27:19.547 Fetching value of define "__AVX__" : 1 00:27:19.547 Fetching value of define "__AVX2__" : 1 00:27:19.547 Fetching value of define "__AVX512BW__" : 1 00:27:19.547 Fetching value of define "__AVX512CD__" : 1 00:27:19.547 Fetching value of define "__AVX512DQ__" : 1 00:27:19.547 Fetching value of define "__AVX512F__" : 1 00:27:19.547 Fetching value of define "__AVX512VL__" : 1 00:27:19.547 Fetching value of define "__PCLMUL__" : 1 00:27:19.547 Fetching value of define "__RDRND__" : 1 00:27:19.547 Fetching value of define "__RDSEED__" : 1 00:27:19.547 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:27:19.547 Fetching value of define "__znver1__" : (undefined) 00:27:19.547 Fetching value of define "__znver2__" : (undefined) 00:27:19.547 Fetching value of define "__znver3__" : (undefined) 00:27:19.547 Fetching value of define "__znver4__" : (undefined) 00:27:19.547 Compiler for C supports arguments -Wno-format-truncation: YES 00:27:19.547 Message: lib/log: Defining dependency "log" 00:27:19.547 Message: lib/kvargs: Defining dependency "kvargs" 00:27:19.547 Message: lib/telemetry: Defining dependency "telemetry" 00:27:19.547 Checking for function "getentropy" : NO 00:27:19.547 Message: lib/eal: Defining dependency "eal" 00:27:19.547 Message: lib/ring: Defining dependency "ring" 00:27:19.547 Message: lib/rcu: Defining dependency "rcu" 00:27:19.547 Message: lib/mempool: Defining dependency "mempool" 00:27:19.547 Message: lib/mbuf: Defining dependency "mbuf" 00:27:19.547 Fetching value of define "__PCLMUL__" : 1 (cached) 00:27:19.547 Fetching value of define "__AVX512F__" : 1 (cached) 00:27:19.547 Fetching value of define "__AVX512BW__" : 1 (cached) 00:27:19.547 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:27:19.547 Fetching value of define "__AVX512VL__" : 1 (cached) 00:27:19.547 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:27:19.547 Compiler for C supports arguments -mpclmul: YES 00:27:19.547 Compiler for C supports arguments -maes: YES 00:27:19.547 Compiler for C supports arguments -mavx512f: YES (cached) 00:27:19.547 Compiler for C supports arguments -mavx512bw: YES 00:27:19.547 Compiler for C supports arguments -mavx512dq: YES 00:27:19.547 Compiler for C supports arguments -mavx512vl: YES 00:27:19.547 Compiler for C supports arguments -mvpclmulqdq: YES 00:27:19.547 Compiler for C supports arguments -mavx2: YES 00:27:19.547 Compiler for C supports arguments -mavx: YES 00:27:19.547 Message: lib/net: Defining dependency "net" 00:27:19.547 Message: lib/meter: Defining dependency "meter" 00:27:19.547 Message: lib/ethdev: Defining dependency "ethdev" 00:27:19.547 Message: lib/pci: Defining dependency "pci" 00:27:19.547 Message: lib/cmdline: Defining dependency "cmdline" 00:27:19.547 Message: lib/hash: Defining dependency "hash" 00:27:19.547 Message: lib/timer: Defining dependency "timer" 00:27:19.547 Message: lib/compressdev: Defining dependency "compressdev" 00:27:19.547 Message: 
lib/cryptodev: Defining dependency "cryptodev" 00:27:19.547 Message: lib/dmadev: Defining dependency "dmadev" 00:27:19.547 Compiler for C supports arguments -Wno-cast-qual: YES 00:27:19.547 Message: lib/power: Defining dependency "power" 00:27:19.547 Message: lib/reorder: Defining dependency "reorder" 00:27:19.547 Message: lib/security: Defining dependency "security" 00:27:19.547 Has header "linux/userfaultfd.h" : YES 00:27:19.547 Has header "linux/vduse.h" : YES 00:27:19.547 Message: lib/vhost: Defining dependency "vhost" 00:27:19.547 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:27:19.547 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:27:19.547 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:27:19.547 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:27:19.547 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:27:19.547 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:27:19.547 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:27:19.547 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:27:19.547 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:27:19.547 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:27:19.547 Program doxygen found: YES (/usr/local/bin/doxygen) 00:27:19.547 Configuring doxy-api-html.conf using configuration 00:27:19.547 Configuring doxy-api-man.conf using configuration 00:27:19.547 Program mandb found: YES (/usr/bin/mandb) 00:27:19.547 Program sphinx-build found: NO 00:27:19.547 Configuring rte_build_config.h using configuration 00:27:19.548 Message: 00:27:19.548 ================= 00:27:19.548 Applications Enabled 00:27:19.548 ================= 00:27:19.548 00:27:19.548 apps: 00:27:19.548 00:27:19.548 00:27:19.548 Message: 00:27:19.548 ================= 00:27:19.548 Libraries Enabled 00:27:19.548 ================= 00:27:19.548 00:27:19.548 libs: 00:27:19.548 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:27:19.548 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:27:19.548 cryptodev, dmadev, power, reorder, security, vhost, 00:27:19.548 00:27:19.548 Message: 00:27:19.548 =============== 00:27:19.548 Drivers Enabled 00:27:19.548 =============== 00:27:19.548 00:27:19.548 common: 00:27:19.548 00:27:19.548 bus: 00:27:19.548 pci, vdev, 00:27:19.548 mempool: 00:27:19.548 ring, 00:27:19.548 dma: 00:27:19.548 00:27:19.548 net: 00:27:19.548 00:27:19.548 crypto: 00:27:19.548 00:27:19.548 compress: 00:27:19.548 00:27:19.548 vdpa: 00:27:19.548 00:27:19.548 00:27:19.548 Message: 00:27:19.548 ================= 00:27:19.548 Content Skipped 00:27:19.548 ================= 00:27:19.548 00:27:19.548 apps: 00:27:19.548 dumpcap: explicitly disabled via build config 00:27:19.548 graph: explicitly disabled via build config 00:27:19.548 pdump: explicitly disabled via build config 00:27:19.548 proc-info: explicitly disabled via build config 00:27:19.548 test-acl: explicitly disabled via build config 00:27:19.548 test-bbdev: explicitly disabled via build config 00:27:19.548 test-cmdline: explicitly disabled via build config 00:27:19.548 test-compress-perf: explicitly disabled via build config 00:27:19.548 test-crypto-perf: explicitly disabled via build config 00:27:19.548 test-dma-perf: explicitly disabled via build config 00:27:19.548 test-eventdev: explicitly disabled via build config 00:27:19.548 test-fib: explicitly disabled via build config 
00:27:19.548 test-flow-perf: explicitly disabled via build config 00:27:19.548 test-gpudev: explicitly disabled via build config 00:27:19.548 test-mldev: explicitly disabled via build config 00:27:19.548 test-pipeline: explicitly disabled via build config 00:27:19.548 test-pmd: explicitly disabled via build config 00:27:19.548 test-regex: explicitly disabled via build config 00:27:19.548 test-sad: explicitly disabled via build config 00:27:19.548 test-security-perf: explicitly disabled via build config 00:27:19.548 00:27:19.548 libs: 00:27:19.548 argparse: explicitly disabled via build config 00:27:19.548 metrics: explicitly disabled via build config 00:27:19.548 acl: explicitly disabled via build config 00:27:19.548 bbdev: explicitly disabled via build config 00:27:19.548 bitratestats: explicitly disabled via build config 00:27:19.548 bpf: explicitly disabled via build config 00:27:19.548 cfgfile: explicitly disabled via build config 00:27:19.548 distributor: explicitly disabled via build config 00:27:19.548 efd: explicitly disabled via build config 00:27:19.548 eventdev: explicitly disabled via build config 00:27:19.548 dispatcher: explicitly disabled via build config 00:27:19.548 gpudev: explicitly disabled via build config 00:27:19.548 gro: explicitly disabled via build config 00:27:19.548 gso: explicitly disabled via build config 00:27:19.548 ip_frag: explicitly disabled via build config 00:27:19.548 jobstats: explicitly disabled via build config 00:27:19.548 latencystats: explicitly disabled via build config 00:27:19.548 lpm: explicitly disabled via build config 00:27:19.548 member: explicitly disabled via build config 00:27:19.548 pcapng: explicitly disabled via build config 00:27:19.548 rawdev: explicitly disabled via build config 00:27:19.548 regexdev: explicitly disabled via build config 00:27:19.548 mldev: explicitly disabled via build config 00:27:19.548 rib: explicitly disabled via build config 00:27:19.548 sched: explicitly disabled via build config 00:27:19.548 stack: explicitly disabled via build config 00:27:19.548 ipsec: explicitly disabled via build config 00:27:19.548 pdcp: explicitly disabled via build config 00:27:19.548 fib: explicitly disabled via build config 00:27:19.548 port: explicitly disabled via build config 00:27:19.548 pdump: explicitly disabled via build config 00:27:19.548 table: explicitly disabled via build config 00:27:19.548 pipeline: explicitly disabled via build config 00:27:19.548 graph: explicitly disabled via build config 00:27:19.548 node: explicitly disabled via build config 00:27:19.548 00:27:19.548 drivers: 00:27:19.548 common/cpt: not in enabled drivers build config 00:27:19.548 common/dpaax: not in enabled drivers build config 00:27:19.548 common/iavf: not in enabled drivers build config 00:27:19.548 common/idpf: not in enabled drivers build config 00:27:19.548 common/ionic: not in enabled drivers build config 00:27:19.548 common/mvep: not in enabled drivers build config 00:27:19.548 common/octeontx: not in enabled drivers build config 00:27:19.548 bus/auxiliary: not in enabled drivers build config 00:27:19.548 bus/cdx: not in enabled drivers build config 00:27:19.548 bus/dpaa: not in enabled drivers build config 00:27:19.548 bus/fslmc: not in enabled drivers build config 00:27:19.548 bus/ifpga: not in enabled drivers build config 00:27:19.548 bus/platform: not in enabled drivers build config 00:27:19.548 bus/uacce: not in enabled drivers build config 00:27:19.548 bus/vmbus: not in enabled drivers build config 00:27:19.548 common/cnxk: not 
in enabled drivers build config 00:27:19.548 common/mlx5: not in enabled drivers build config 00:27:19.548 common/nfp: not in enabled drivers build config 00:27:19.548 common/nitrox: not in enabled drivers build config 00:27:19.548 common/qat: not in enabled drivers build config 00:27:19.548 common/sfc_efx: not in enabled drivers build config 00:27:19.548 mempool/bucket: not in enabled drivers build config 00:27:19.548 mempool/cnxk: not in enabled drivers build config 00:27:19.548 mempool/dpaa: not in enabled drivers build config 00:27:19.548 mempool/dpaa2: not in enabled drivers build config 00:27:19.548 mempool/octeontx: not in enabled drivers build config 00:27:19.548 mempool/stack: not in enabled drivers build config 00:27:19.548 dma/cnxk: not in enabled drivers build config 00:27:19.548 dma/dpaa: not in enabled drivers build config 00:27:19.548 dma/dpaa2: not in enabled drivers build config 00:27:19.548 dma/hisilicon: not in enabled drivers build config 00:27:19.548 dma/idxd: not in enabled drivers build config 00:27:19.548 dma/ioat: not in enabled drivers build config 00:27:19.548 dma/skeleton: not in enabled drivers build config 00:27:19.548 net/af_packet: not in enabled drivers build config 00:27:19.548 net/af_xdp: not in enabled drivers build config 00:27:19.548 net/ark: not in enabled drivers build config 00:27:19.548 net/atlantic: not in enabled drivers build config 00:27:19.548 net/avp: not in enabled drivers build config 00:27:19.548 net/axgbe: not in enabled drivers build config 00:27:19.548 net/bnx2x: not in enabled drivers build config 00:27:19.548 net/bnxt: not in enabled drivers build config 00:27:19.548 net/bonding: not in enabled drivers build config 00:27:19.548 net/cnxk: not in enabled drivers build config 00:27:19.548 net/cpfl: not in enabled drivers build config 00:27:19.548 net/cxgbe: not in enabled drivers build config 00:27:19.548 net/dpaa: not in enabled drivers build config 00:27:19.548 net/dpaa2: not in enabled drivers build config 00:27:19.548 net/e1000: not in enabled drivers build config 00:27:19.548 net/ena: not in enabled drivers build config 00:27:19.548 net/enetc: not in enabled drivers build config 00:27:19.548 net/enetfec: not in enabled drivers build config 00:27:19.548 net/enic: not in enabled drivers build config 00:27:19.548 net/failsafe: not in enabled drivers build config 00:27:19.548 net/fm10k: not in enabled drivers build config 00:27:19.548 net/gve: not in enabled drivers build config 00:27:19.548 net/hinic: not in enabled drivers build config 00:27:19.548 net/hns3: not in enabled drivers build config 00:27:19.548 net/i40e: not in enabled drivers build config 00:27:19.548 net/iavf: not in enabled drivers build config 00:27:19.549 net/ice: not in enabled drivers build config 00:27:19.549 net/idpf: not in enabled drivers build config 00:27:19.549 net/igc: not in enabled drivers build config 00:27:19.549 net/ionic: not in enabled drivers build config 00:27:19.549 net/ipn3ke: not in enabled drivers build config 00:27:19.549 net/ixgbe: not in enabled drivers build config 00:27:19.549 net/mana: not in enabled drivers build config 00:27:19.549 net/memif: not in enabled drivers build config 00:27:19.549 net/mlx4: not in enabled drivers build config 00:27:19.549 net/mlx5: not in enabled drivers build config 00:27:19.549 net/mvneta: not in enabled drivers build config 00:27:19.549 net/mvpp2: not in enabled drivers build config 00:27:19.549 net/netvsc: not in enabled drivers build config 00:27:19.549 net/nfb: not in enabled drivers build config 
00:27:19.549 net/nfp: not in enabled drivers build config 00:27:19.549 net/ngbe: not in enabled drivers build config 00:27:19.549 net/null: not in enabled drivers build config 00:27:19.549 net/octeontx: not in enabled drivers build config 00:27:19.549 net/octeon_ep: not in enabled drivers build config 00:27:19.549 net/pcap: not in enabled drivers build config 00:27:19.549 net/pfe: not in enabled drivers build config 00:27:19.549 net/qede: not in enabled drivers build config 00:27:19.549 net/ring: not in enabled drivers build config 00:27:19.549 net/sfc: not in enabled drivers build config 00:27:19.549 net/softnic: not in enabled drivers build config 00:27:19.549 net/tap: not in enabled drivers build config 00:27:19.549 net/thunderx: not in enabled drivers build config 00:27:19.549 net/txgbe: not in enabled drivers build config 00:27:19.549 net/vdev_netvsc: not in enabled drivers build config 00:27:19.549 net/vhost: not in enabled drivers build config 00:27:19.549 net/virtio: not in enabled drivers build config 00:27:19.549 net/vmxnet3: not in enabled drivers build config 00:27:19.549 raw/*: missing internal dependency, "rawdev" 00:27:19.549 crypto/armv8: not in enabled drivers build config 00:27:19.549 crypto/bcmfs: not in enabled drivers build config 00:27:19.549 crypto/caam_jr: not in enabled drivers build config 00:27:19.549 crypto/ccp: not in enabled drivers build config 00:27:19.549 crypto/cnxk: not in enabled drivers build config 00:27:19.549 crypto/dpaa_sec: not in enabled drivers build config 00:27:19.549 crypto/dpaa2_sec: not in enabled drivers build config 00:27:19.549 crypto/ipsec_mb: not in enabled drivers build config 00:27:19.549 crypto/mlx5: not in enabled drivers build config 00:27:19.549 crypto/mvsam: not in enabled drivers build config 00:27:19.549 crypto/nitrox: not in enabled drivers build config 00:27:19.549 crypto/null: not in enabled drivers build config 00:27:19.549 crypto/octeontx: not in enabled drivers build config 00:27:19.549 crypto/openssl: not in enabled drivers build config 00:27:19.549 crypto/scheduler: not in enabled drivers build config 00:27:19.549 crypto/uadk: not in enabled drivers build config 00:27:19.549 crypto/virtio: not in enabled drivers build config 00:27:19.549 compress/isal: not in enabled drivers build config 00:27:19.549 compress/mlx5: not in enabled drivers build config 00:27:19.549 compress/nitrox: not in enabled drivers build config 00:27:19.549 compress/octeontx: not in enabled drivers build config 00:27:19.549 compress/zlib: not in enabled drivers build config 00:27:19.549 regex/*: missing internal dependency, "regexdev" 00:27:19.549 ml/*: missing internal dependency, "mldev" 00:27:19.549 vdpa/ifc: not in enabled drivers build config 00:27:19.549 vdpa/mlx5: not in enabled drivers build config 00:27:19.549 vdpa/nfp: not in enabled drivers build config 00:27:19.549 vdpa/sfc: not in enabled drivers build config 00:27:19.549 event/*: missing internal dependency, "eventdev" 00:27:19.549 baseband/*: missing internal dependency, "bbdev" 00:27:19.549 gpu/*: missing internal dependency, "gpudev" 00:27:19.549 00:27:19.549 00:27:19.549 Build targets in project: 85 00:27:19.549 00:27:19.549 DPDK 24.03.0 00:27:19.549 00:27:19.549 User defined options 00:27:19.549 buildtype : debug 00:27:19.549 default_library : shared 00:27:19.549 libdir : lib 00:27:19.549 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:27:19.549 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:27:19.549 c_link_args : 
00:27:19.549 cpu_instruction_set: native 00:27:19.549 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:27:19.549 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:27:19.549 enable_docs : false 00:27:19.549 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:27:19.549 enable_kmods : false 00:27:19.549 max_lcores : 128 00:27:19.549 tests : false 00:27:19.549 00:27:19.549 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:27:19.549 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:27:19.549 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:27:19.549 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:27:19.549 [3/268] Linking static target lib/librte_log.a 00:27:19.549 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:27:19.809 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:27:19.809 [6/268] Linking static target lib/librte_kvargs.a 00:27:20.068 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:27:20.068 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:27:20.068 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:27:20.068 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:27:20.068 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:27:20.068 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:27:20.068 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:27:20.068 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:27:20.068 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:27:20.327 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:27:20.327 [17/268] Linking static target lib/librte_telemetry.a 00:27:20.327 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:27:20.584 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:27:20.584 [20/268] Linking target lib/librte_log.so.24.1 00:27:20.584 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:27:20.842 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:27:20.842 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:27:20.842 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:27:20.842 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:27:20.842 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:27:20.842 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:27:20.842 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:27:20.842 [29/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:27:20.842 [30/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:27:20.842 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:27:21.101 [32/268] Linking target lib/librte_kvargs.so.24.1 00:27:21.101 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:27:21.101 [34/268] Linking target lib/librte_telemetry.so.24.1 00:27:21.101 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:27:21.360 [36/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:27:21.360 [37/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:27:21.360 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:27:21.360 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:27:21.360 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:27:21.619 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:27:21.619 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:27:21.619 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:27:21.619 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:27:21.619 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:27:21.619 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:27:21.619 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:27:21.620 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:27:21.620 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:27:21.878 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:27:21.878 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:27:22.137 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:27:22.137 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:27:22.137 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:27:22.137 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:27:22.137 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:27:22.137 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:27:22.396 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:27:22.396 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:27:22.396 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:27:22.396 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:27:22.654 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:27:22.654 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:27:22.654 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:27:22.654 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:27:22.654 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:27:22.654 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:27:22.912 [68/268] Compiling C object 
lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:27:22.912 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:27:22.912 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:27:22.912 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:27:23.170 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:27:23.170 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:27:23.170 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:27:23.170 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:27:23.170 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:27:23.170 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:27:23.428 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:27:23.428 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:27:23.428 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:27:23.428 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:27:23.685 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:27:23.686 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:27:23.686 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:27:23.686 [85/268] Linking static target lib/librte_ring.a 00:27:23.686 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:27:23.943 [87/268] Linking static target lib/librte_eal.a 00:27:23.943 [88/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:27:23.943 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:27:23.943 [90/268] Linking static target lib/librte_rcu.a 00:27:23.943 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:27:23.943 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:27:24.201 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:27:24.201 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:27:24.201 [95/268] Linking static target lib/librte_mempool.a 00:27:24.201 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:27:24.201 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:27:24.201 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:27:24.459 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:27:24.459 [100/268] Linking static target lib/librte_mbuf.a 00:27:24.459 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:27:24.459 [102/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:27:24.459 [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:27:24.459 [104/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:27:24.459 [105/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:27:24.716 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:27:24.716 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:27:24.716 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:27:24.716 [109/268] Linking static target lib/librte_net.a 00:27:24.974 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:27:24.974 [111/268] 
Linking static target lib/librte_meter.a 00:27:24.974 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:27:24.974 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:27:25.232 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:27:25.232 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:27:25.232 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:27:25.232 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:27:25.490 [118/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:27:25.490 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:27:25.749 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:27:25.749 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:27:25.749 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:27:26.006 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:27:26.006 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:27:26.006 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:27:26.264 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:27:26.264 [127/268] Linking static target lib/librte_pci.a 00:27:26.264 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:27:26.264 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:27:26.264 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:27:26.264 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:27:26.264 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:27:26.522 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:27:26.523 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:27:26.523 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:27:26.523 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:27:26.523 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:27:26.523 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:27:26.523 [139/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:27:26.523 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:27:26.523 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:27:26.523 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:27:26.523 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:27:26.781 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:27:26.781 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:27:26.781 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:27:26.781 [147/268] Linking static target lib/librte_ethdev.a 00:27:26.781 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:27:26.781 [149/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:27:26.781 [150/268] Linking static target 
lib/librte_cmdline.a 00:27:27.039 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:27:27.039 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:27:27.039 [153/268] Linking static target lib/librte_timer.a 00:27:27.298 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:27:27.298 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:27:27.298 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:27:27.298 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:27:27.556 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:27:27.556 [159/268] Linking static target lib/librte_hash.a 00:27:27.556 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:27:27.556 [161/268] Linking static target lib/librte_compressdev.a 00:27:27.556 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:27:27.815 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:27:27.815 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:27:27.815 [165/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:27:27.815 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:27:27.815 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:27:28.074 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:27:28.074 [169/268] Linking static target lib/librte_dmadev.a 00:27:28.074 [170/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:27:28.074 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:27:28.333 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:27:28.333 [173/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:27:28.333 [174/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:27:28.333 [175/268] Linking static target lib/librte_cryptodev.a 00:27:28.593 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:27:28.593 [177/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:27:28.593 [178/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:27:28.593 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:27:28.593 [180/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:27:28.852 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:27:28.852 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:27:28.852 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:27:28.852 [184/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:27:29.111 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:27:29.111 [186/268] Linking static target lib/librte_power.a 00:27:29.111 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:27:29.111 [188/268] Linking static target lib/librte_reorder.a 00:27:29.370 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:27:29.370 [190/268] Compiling C object 
lib/librte_security.a.p/security_rte_security.c.o 00:27:29.370 [191/268] Linking static target lib/librte_security.a 00:27:29.370 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:27:29.370 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:27:29.629 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:27:29.888 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:27:30.146 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:27:30.146 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:27:30.146 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:27:30.146 [199/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:27:30.146 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:27:30.405 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:27:30.405 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:27:30.664 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:27:30.664 [204/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:27:30.664 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:27:30.664 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:27:30.664 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:27:30.924 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:27:30.924 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:27:30.924 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:27:30.924 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:27:31.182 [212/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:27:31.183 [213/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:27:31.183 [214/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:27:31.183 [215/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:27:31.183 [216/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:27:31.183 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:27:31.183 [218/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:27:31.183 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:27:31.183 [220/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:27:31.183 [221/268] Linking static target drivers/librte_bus_pci.a 00:27:31.183 [222/268] Linking static target drivers/librte_bus_vdev.a 00:27:31.441 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:27:31.441 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:27:31.441 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:27:31.441 [226/268] Linking static target drivers/librte_mempool_ring.a 00:27:31.441 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:27:31.699 
[228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:27:32.265 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:27:32.265 [230/268] Linking static target lib/librte_vhost.a 00:27:34.168 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:27:34.427 [232/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:27:34.427 [233/268] Linking target lib/librte_eal.so.24.1 00:27:34.685 [234/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:27:34.685 [235/268] Linking target lib/librte_dmadev.so.24.1 00:27:34.685 [236/268] Linking target lib/librte_meter.so.24.1 00:27:34.685 [237/268] Linking target lib/librte_pci.so.24.1 00:27:34.685 [238/268] Linking target lib/librte_ring.so.24.1 00:27:34.685 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:27:34.685 [240/268] Linking target lib/librte_timer.so.24.1 00:27:34.685 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:27:34.685 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:27:34.685 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:27:34.685 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:27:34.685 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:27:34.942 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:27:34.942 [247/268] Linking target lib/librte_mempool.so.24.1 00:27:34.942 [248/268] Linking target lib/librte_rcu.so.24.1 00:27:34.942 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:27:34.942 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:27:34.942 [251/268] Linking target drivers/librte_mempool_ring.so.24.1 00:27:34.942 [252/268] Linking target lib/librte_mbuf.so.24.1 00:27:35.201 [253/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:27:35.201 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:27:35.201 [255/268] Linking target lib/librte_compressdev.so.24.1 00:27:35.201 [256/268] Linking target lib/librte_net.so.24.1 00:27:35.201 [257/268] Linking target lib/librte_reorder.so.24.1 00:27:35.460 [258/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:27:35.461 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:27:35.461 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:27:35.461 [261/268] Linking target lib/librte_security.so.24.1 00:27:35.461 [262/268] Linking target lib/librte_hash.so.24.1 00:27:35.461 [263/268] Linking target lib/librte_cmdline.so.24.1 00:27:35.461 [264/268] Linking target lib/librte_ethdev.so.24.1 00:27:35.461 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:27:35.461 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:27:35.719 [267/268] Linking target lib/librte_power.so.24.1 00:27:35.719 [268/268] Linking target lib/librte_vhost.so.24.1 00:27:35.719 INFO: autodetecting backend as ninja 00:27:35.719 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:28:02.272 CC lib/ut/ut.o 00:28:02.272 CC 
lib/ut_mock/mock.o 00:28:02.272 CC lib/log/log_flags.o 00:28:02.272 CC lib/log/log.o 00:28:02.272 CC lib/log/log_deprecated.o 00:28:02.272 LIB libspdk_ut.a 00:28:02.272 LIB libspdk_ut_mock.a 00:28:02.272 LIB libspdk_log.a 00:28:02.272 SO libspdk_ut.so.2.0 00:28:02.272 SO libspdk_ut_mock.so.6.0 00:28:02.272 SO libspdk_log.so.7.1 00:28:02.272 SYMLINK libspdk_ut.so 00:28:02.272 SYMLINK libspdk_ut_mock.so 00:28:02.272 SYMLINK libspdk_log.so 00:28:02.272 CC lib/ioat/ioat.o 00:28:02.272 CC lib/dma/dma.o 00:28:02.272 CXX lib/trace_parser/trace.o 00:28:02.272 CC lib/util/base64.o 00:28:02.272 CC lib/util/bit_array.o 00:28:02.272 CC lib/util/crc16.o 00:28:02.272 CC lib/util/cpuset.o 00:28:02.272 CC lib/util/crc32.o 00:28:02.272 CC lib/util/crc32c.o 00:28:02.272 CC lib/vfio_user/host/vfio_user_pci.o 00:28:02.272 CC lib/util/crc32_ieee.o 00:28:02.272 CC lib/util/crc64.o 00:28:02.272 CC lib/util/dif.o 00:28:02.272 CC lib/vfio_user/host/vfio_user.o 00:28:02.272 CC lib/util/fd.o 00:28:02.272 CC lib/util/fd_group.o 00:28:02.272 LIB libspdk_dma.a 00:28:02.272 LIB libspdk_ioat.a 00:28:02.272 CC lib/util/file.o 00:28:02.272 CC lib/util/hexlify.o 00:28:02.272 CC lib/util/iov.o 00:28:02.272 SO libspdk_dma.so.5.0 00:28:02.272 CC lib/util/math.o 00:28:02.272 SO libspdk_ioat.so.7.0 00:28:02.272 LIB libspdk_vfio_user.a 00:28:02.272 SYMLINK libspdk_dma.so 00:28:02.272 CC lib/util/net.o 00:28:02.272 SYMLINK libspdk_ioat.so 00:28:02.272 CC lib/util/pipe.o 00:28:02.272 SO libspdk_vfio_user.so.5.0 00:28:02.272 CC lib/util/strerror_tls.o 00:28:02.272 SYMLINK libspdk_vfio_user.so 00:28:02.272 CC lib/util/string.o 00:28:02.272 CC lib/util/uuid.o 00:28:02.272 CC lib/util/xor.o 00:28:02.272 CC lib/util/zipf.o 00:28:02.272 CC lib/util/md5.o 00:28:02.272 LIB libspdk_util.a 00:28:02.272 SO libspdk_util.so.10.1 00:28:02.272 LIB libspdk_trace_parser.a 00:28:02.272 SO libspdk_trace_parser.so.6.0 00:28:02.272 SYMLINK libspdk_util.so 00:28:02.272 SYMLINK libspdk_trace_parser.so 00:28:02.272 CC lib/env_dpdk/env.o 00:28:02.272 CC lib/env_dpdk/pci.o 00:28:02.272 CC lib/env_dpdk/memory.o 00:28:02.272 CC lib/env_dpdk/threads.o 00:28:02.272 CC lib/env_dpdk/init.o 00:28:02.272 CC lib/idxd/idxd.o 00:28:02.272 CC lib/json/json_parse.o 00:28:02.272 CC lib/vmd/vmd.o 00:28:02.272 CC lib/rdma_utils/rdma_utils.o 00:28:02.272 CC lib/conf/conf.o 00:28:02.272 CC lib/env_dpdk/pci_ioat.o 00:28:02.530 CC lib/json/json_util.o 00:28:02.530 LIB libspdk_conf.a 00:28:02.530 SO libspdk_conf.so.6.0 00:28:02.530 LIB libspdk_rdma_utils.a 00:28:02.530 CC lib/vmd/led.o 00:28:02.530 SO libspdk_rdma_utils.so.1.0 00:28:02.530 SYMLINK libspdk_conf.so 00:28:02.530 CC lib/env_dpdk/pci_virtio.o 00:28:02.530 CC lib/idxd/idxd_user.o 00:28:02.530 CC lib/env_dpdk/pci_vmd.o 00:28:02.530 SYMLINK libspdk_rdma_utils.so 00:28:02.530 CC lib/env_dpdk/pci_idxd.o 00:28:02.789 CC lib/json/json_write.o 00:28:02.789 CC lib/env_dpdk/pci_event.o 00:28:02.789 CC lib/env_dpdk/sigbus_handler.o 00:28:02.789 CC lib/idxd/idxd_kernel.o 00:28:02.789 CC lib/env_dpdk/pci_dpdk.o 00:28:02.789 CC lib/env_dpdk/pci_dpdk_2207.o 00:28:02.789 CC lib/env_dpdk/pci_dpdk_2211.o 00:28:02.789 CC lib/rdma_provider/common.o 00:28:02.789 CC lib/rdma_provider/rdma_provider_verbs.o 00:28:02.789 LIB libspdk_vmd.a 00:28:03.049 LIB libspdk_idxd.a 00:28:03.049 SO libspdk_vmd.so.6.0 00:28:03.049 LIB libspdk_json.a 00:28:03.049 SO libspdk_idxd.so.12.1 00:28:03.049 SYMLINK libspdk_vmd.so 00:28:03.049 SO libspdk_json.so.6.0 00:28:03.049 SYMLINK libspdk_idxd.so 00:28:03.049 SYMLINK libspdk_json.so 00:28:03.049 LIB 
libspdk_rdma_provider.a 00:28:03.308 SO libspdk_rdma_provider.so.7.0 00:28:03.308 SYMLINK libspdk_rdma_provider.so 00:28:03.308 CC lib/jsonrpc/jsonrpc_server.o 00:28:03.308 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:28:03.308 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:28:03.308 CC lib/jsonrpc/jsonrpc_client.o 00:28:03.567 LIB libspdk_env_dpdk.a 00:28:03.567 SO libspdk_env_dpdk.so.15.1 00:28:03.567 LIB libspdk_jsonrpc.a 00:28:03.826 SO libspdk_jsonrpc.so.6.0 00:28:03.826 SYMLINK libspdk_env_dpdk.so 00:28:03.826 SYMLINK libspdk_jsonrpc.so 00:28:04.395 CC lib/rpc/rpc.o 00:28:04.395 LIB libspdk_rpc.a 00:28:04.395 SO libspdk_rpc.so.6.0 00:28:04.656 SYMLINK libspdk_rpc.so 00:28:04.945 CC lib/trace/trace_flags.o 00:28:04.945 CC lib/trace/trace.o 00:28:04.945 CC lib/trace/trace_rpc.o 00:28:04.945 CC lib/notify/notify_rpc.o 00:28:04.945 CC lib/notify/notify.o 00:28:04.945 CC lib/keyring/keyring.o 00:28:04.945 CC lib/keyring/keyring_rpc.o 00:28:04.945 LIB libspdk_notify.a 00:28:05.213 SO libspdk_notify.so.6.0 00:28:05.213 LIB libspdk_keyring.a 00:28:05.213 SYMLINK libspdk_notify.so 00:28:05.213 LIB libspdk_trace.a 00:28:05.213 SO libspdk_keyring.so.2.0 00:28:05.213 SO libspdk_trace.so.11.0 00:28:05.213 SYMLINK libspdk_keyring.so 00:28:05.213 SYMLINK libspdk_trace.so 00:28:05.781 CC lib/thread/thread.o 00:28:05.781 CC lib/thread/iobuf.o 00:28:05.781 CC lib/sock/sock.o 00:28:05.781 CC lib/sock/sock_rpc.o 00:28:06.040 LIB libspdk_sock.a 00:28:06.040 SO libspdk_sock.so.10.0 00:28:06.300 SYMLINK libspdk_sock.so 00:28:06.559 CC lib/nvme/nvme_ctrlr_cmd.o 00:28:06.559 CC lib/nvme/nvme_ctrlr.o 00:28:06.559 CC lib/nvme/nvme_fabric.o 00:28:06.559 CC lib/nvme/nvme_ns_cmd.o 00:28:06.559 CC lib/nvme/nvme.o 00:28:06.559 CC lib/nvme/nvme_pcie_common.o 00:28:06.559 CC lib/nvme/nvme_ns.o 00:28:06.559 CC lib/nvme/nvme_pcie.o 00:28:06.559 CC lib/nvme/nvme_qpair.o 00:28:07.128 LIB libspdk_thread.a 00:28:07.128 SO libspdk_thread.so.11.0 00:28:07.128 SYMLINK libspdk_thread.so 00:28:07.128 CC lib/nvme/nvme_quirks.o 00:28:07.389 CC lib/nvme/nvme_transport.o 00:28:07.389 CC lib/nvme/nvme_discovery.o 00:28:07.389 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:28:07.389 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:28:07.389 CC lib/nvme/nvme_tcp.o 00:28:07.389 CC lib/nvme/nvme_opal.o 00:28:07.389 CC lib/nvme/nvme_io_msg.o 00:28:07.648 CC lib/nvme/nvme_poll_group.o 00:28:07.648 CC lib/nvme/nvme_zns.o 00:28:07.908 CC lib/nvme/nvme_stubs.o 00:28:07.908 CC lib/nvme/nvme_auth.o 00:28:07.908 CC lib/nvme/nvme_cuse.o 00:28:07.908 CC lib/nvme/nvme_rdma.o 00:28:08.168 CC lib/accel/accel.o 00:28:08.168 CC lib/accel/accel_rpc.o 00:28:08.168 CC lib/blob/blobstore.o 00:28:08.168 CC lib/blob/request.o 00:28:08.429 CC lib/blob/zeroes.o 00:28:08.429 CC lib/blob/blob_bs_dev.o 00:28:08.429 CC lib/accel/accel_sw.o 00:28:08.696 CC lib/init/json_config.o 00:28:08.696 CC lib/init/subsystem.o 00:28:08.696 CC lib/init/subsystem_rpc.o 00:28:08.696 CC lib/virtio/virtio.o 00:28:08.696 CC lib/virtio/virtio_vhost_user.o 00:28:08.696 CC lib/fsdev/fsdev.o 00:28:08.955 CC lib/virtio/virtio_vfio_user.o 00:28:08.955 CC lib/init/rpc.o 00:28:08.955 CC lib/virtio/virtio_pci.o 00:28:08.955 CC lib/fsdev/fsdev_io.o 00:28:09.215 CC lib/fsdev/fsdev_rpc.o 00:28:09.215 LIB libspdk_init.a 00:28:09.215 SO libspdk_init.so.6.0 00:28:09.215 LIB libspdk_accel.a 00:28:09.215 SYMLINK libspdk_init.so 00:28:09.215 SO libspdk_accel.so.16.0 00:28:09.215 LIB libspdk_nvme.a 00:28:09.215 LIB libspdk_virtio.a 00:28:09.215 SYMLINK libspdk_accel.so 00:28:09.474 SO libspdk_virtio.so.7.0 00:28:09.474 SO 
libspdk_nvme.so.15.0 00:28:09.474 SYMLINK libspdk_virtio.so 00:28:09.474 LIB libspdk_fsdev.a 00:28:09.474 CC lib/event/log_rpc.o 00:28:09.474 CC lib/event/app_rpc.o 00:28:09.474 CC lib/event/app.o 00:28:09.474 CC lib/event/scheduler_static.o 00:28:09.474 CC lib/event/reactor.o 00:28:09.474 SO libspdk_fsdev.so.2.0 00:28:09.474 SYMLINK libspdk_fsdev.so 00:28:09.475 CC lib/bdev/bdev.o 00:28:09.475 CC lib/bdev/bdev_rpc.o 00:28:09.733 SYMLINK libspdk_nvme.so 00:28:09.734 CC lib/bdev/bdev_zone.o 00:28:09.734 CC lib/bdev/part.o 00:28:09.734 CC lib/bdev/scsi_nvme.o 00:28:09.734 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:28:09.734 LIB libspdk_event.a 00:28:09.991 SO libspdk_event.so.14.0 00:28:09.991 SYMLINK libspdk_event.so 00:28:10.248 LIB libspdk_fuse_dispatcher.a 00:28:10.248 SO libspdk_fuse_dispatcher.so.1.0 00:28:10.506 SYMLINK libspdk_fuse_dispatcher.so 00:28:11.075 LIB libspdk_blob.a 00:28:11.075 SO libspdk_blob.so.11.0 00:28:11.334 SYMLINK libspdk_blob.so 00:28:11.593 CC lib/blobfs/blobfs.o 00:28:11.593 CC lib/blobfs/tree.o 00:28:11.593 CC lib/lvol/lvol.o 00:28:12.185 LIB libspdk_bdev.a 00:28:12.185 SO libspdk_bdev.so.17.0 00:28:12.449 LIB libspdk_blobfs.a 00:28:12.449 SYMLINK libspdk_bdev.so 00:28:12.449 SO libspdk_blobfs.so.10.0 00:28:12.449 SYMLINK libspdk_blobfs.so 00:28:12.449 LIB libspdk_lvol.a 00:28:12.449 SO libspdk_lvol.so.10.0 00:28:12.449 CC lib/ublk/ublk.o 00:28:12.449 CC lib/ublk/ublk_rpc.o 00:28:12.449 CC lib/scsi/dev.o 00:28:12.449 CC lib/scsi/lun.o 00:28:12.449 CC lib/nvmf/ctrlr.o 00:28:12.449 CC lib/scsi/port.o 00:28:12.449 SYMLINK libspdk_lvol.so 00:28:12.449 CC lib/nvmf/ctrlr_discovery.o 00:28:12.449 CC lib/nvmf/ctrlr_bdev.o 00:28:12.449 CC lib/nbd/nbd.o 00:28:12.449 CC lib/ftl/ftl_core.o 00:28:12.708 CC lib/ftl/ftl_init.o 00:28:12.708 CC lib/ftl/ftl_layout.o 00:28:12.708 CC lib/ftl/ftl_debug.o 00:28:12.708 CC lib/scsi/scsi.o 00:28:12.968 CC lib/scsi/scsi_bdev.o 00:28:12.968 CC lib/nbd/nbd_rpc.o 00:28:12.968 CC lib/scsi/scsi_pr.o 00:28:12.968 CC lib/scsi/scsi_rpc.o 00:28:12.968 CC lib/ftl/ftl_io.o 00:28:12.968 CC lib/ftl/ftl_sb.o 00:28:12.968 CC lib/nvmf/subsystem.o 00:28:12.968 LIB libspdk_nbd.a 00:28:13.227 SO libspdk_nbd.so.7.0 00:28:13.227 CC lib/nvmf/nvmf.o 00:28:13.227 LIB libspdk_ublk.a 00:28:13.227 SYMLINK libspdk_nbd.so 00:28:13.227 CC lib/nvmf/nvmf_rpc.o 00:28:13.227 CC lib/nvmf/transport.o 00:28:13.227 SO libspdk_ublk.so.3.0 00:28:13.227 CC lib/ftl/ftl_l2p.o 00:28:13.227 CC lib/scsi/task.o 00:28:13.227 CC lib/nvmf/tcp.o 00:28:13.227 SYMLINK libspdk_ublk.so 00:28:13.227 CC lib/nvmf/stubs.o 00:28:13.227 CC lib/ftl/ftl_l2p_flat.o 00:28:13.485 CC lib/nvmf/mdns_server.o 00:28:13.485 LIB libspdk_scsi.a 00:28:13.485 SO libspdk_scsi.so.9.0 00:28:13.485 CC lib/ftl/ftl_nv_cache.o 00:28:13.485 SYMLINK libspdk_scsi.so 00:28:13.485 CC lib/ftl/ftl_band.o 00:28:13.755 CC lib/nvmf/rdma.o 00:28:13.755 CC lib/nvmf/auth.o 00:28:14.014 CC lib/ftl/ftl_band_ops.o 00:28:14.014 CC lib/iscsi/conn.o 00:28:14.014 CC lib/iscsi/init_grp.o 00:28:14.015 CC lib/iscsi/iscsi.o 00:28:14.274 CC lib/iscsi/param.o 00:28:14.274 CC lib/vhost/vhost.o 00:28:14.274 CC lib/iscsi/portal_grp.o 00:28:14.274 CC lib/ftl/ftl_writer.o 00:28:14.274 CC lib/ftl/ftl_rq.o 00:28:14.534 CC lib/iscsi/tgt_node.o 00:28:14.534 CC lib/vhost/vhost_rpc.o 00:28:14.534 CC lib/vhost/vhost_scsi.o 00:28:14.534 CC lib/vhost/vhost_blk.o 00:28:14.534 CC lib/ftl/ftl_reloc.o 00:28:14.534 CC lib/vhost/rte_vhost_user.o 00:28:14.792 CC lib/iscsi/iscsi_subsystem.o 00:28:14.792 CC lib/iscsi/iscsi_rpc.o 00:28:14.792 CC 
lib/ftl/ftl_l2p_cache.o 00:28:15.051 CC lib/iscsi/task.o 00:28:15.051 CC lib/ftl/ftl_p2l.o 00:28:15.051 CC lib/ftl/ftl_p2l_log.o 00:28:15.310 CC lib/ftl/mngt/ftl_mngt.o 00:28:15.310 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:28:15.310 LIB libspdk_iscsi.a 00:28:15.310 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:28:15.310 SO libspdk_iscsi.so.8.0 00:28:15.310 CC lib/ftl/mngt/ftl_mngt_startup.o 00:28:15.310 CC lib/ftl/mngt/ftl_mngt_md.o 00:28:15.310 CC lib/ftl/mngt/ftl_mngt_misc.o 00:28:15.569 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:28:15.569 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:28:15.569 CC lib/ftl/mngt/ftl_mngt_band.o 00:28:15.569 LIB libspdk_nvmf.a 00:28:15.569 LIB libspdk_vhost.a 00:28:15.569 SYMLINK libspdk_iscsi.so 00:28:15.569 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:28:15.569 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:28:15.569 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:28:15.569 SO libspdk_vhost.so.8.0 00:28:15.569 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:28:15.569 SO libspdk_nvmf.so.20.0 00:28:15.569 CC lib/ftl/utils/ftl_conf.o 00:28:15.827 CC lib/ftl/utils/ftl_md.o 00:28:15.827 SYMLINK libspdk_vhost.so 00:28:15.827 CC lib/ftl/utils/ftl_mempool.o 00:28:15.827 CC lib/ftl/utils/ftl_bitmap.o 00:28:15.827 CC lib/ftl/utils/ftl_property.o 00:28:15.827 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:28:15.827 SYMLINK libspdk_nvmf.so 00:28:15.827 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:28:15.827 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:28:15.827 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:28:15.827 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:28:15.827 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:28:15.827 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:28:16.092 CC lib/ftl/upgrade/ftl_sb_v3.o 00:28:16.092 CC lib/ftl/upgrade/ftl_sb_v5.o 00:28:16.092 CC lib/ftl/nvc/ftl_nvc_dev.o 00:28:16.092 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:28:16.092 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:28:16.092 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:28:16.092 CC lib/ftl/base/ftl_base_dev.o 00:28:16.092 CC lib/ftl/base/ftl_base_bdev.o 00:28:16.092 CC lib/ftl/ftl_trace.o 00:28:16.351 LIB libspdk_ftl.a 00:28:16.610 SO libspdk_ftl.so.9.0 00:28:16.868 SYMLINK libspdk_ftl.so 00:28:17.129 CC module/env_dpdk/env_dpdk_rpc.o 00:28:17.129 CC module/scheduler/dynamic/scheduler_dynamic.o 00:28:17.129 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:28:17.129 CC module/accel/error/accel_error.o 00:28:17.129 CC module/scheduler/gscheduler/gscheduler.o 00:28:17.129 CC module/accel/ioat/accel_ioat.o 00:28:17.129 CC module/blob/bdev/blob_bdev.o 00:28:17.129 CC module/fsdev/aio/fsdev_aio.o 00:28:17.129 CC module/keyring/file/keyring.o 00:28:17.389 CC module/sock/posix/posix.o 00:28:17.389 LIB libspdk_env_dpdk_rpc.a 00:28:17.389 SO libspdk_env_dpdk_rpc.so.6.0 00:28:17.389 SYMLINK libspdk_env_dpdk_rpc.so 00:28:17.389 CC module/fsdev/aio/fsdev_aio_rpc.o 00:28:17.389 CC module/keyring/file/keyring_rpc.o 00:28:17.389 LIB libspdk_scheduler_gscheduler.a 00:28:17.389 LIB libspdk_scheduler_dpdk_governor.a 00:28:17.389 SO libspdk_scheduler_gscheduler.so.4.0 00:28:17.389 SO libspdk_scheduler_dpdk_governor.so.4.0 00:28:17.389 LIB libspdk_scheduler_dynamic.a 00:28:17.389 CC module/accel/ioat/accel_ioat_rpc.o 00:28:17.389 CC module/accel/error/accel_error_rpc.o 00:28:17.389 SO libspdk_scheduler_dynamic.so.4.0 00:28:17.389 SYMLINK libspdk_scheduler_gscheduler.so 00:28:17.389 SYMLINK libspdk_scheduler_dpdk_governor.so 00:28:17.389 CC module/fsdev/aio/linux_aio_mgr.o 00:28:17.389 SYMLINK libspdk_scheduler_dynamic.so 00:28:17.648 LIB libspdk_blob_bdev.a 00:28:17.648 LIB libspdk_keyring_file.a 
00:28:17.648 SO libspdk_blob_bdev.so.11.0 00:28:17.648 SO libspdk_keyring_file.so.2.0 00:28:17.648 LIB libspdk_accel_ioat.a 00:28:17.648 LIB libspdk_accel_error.a 00:28:17.648 SO libspdk_accel_ioat.so.6.0 00:28:17.648 SYMLINK libspdk_blob_bdev.so 00:28:17.648 SYMLINK libspdk_keyring_file.so 00:28:17.648 SO libspdk_accel_error.so.2.0 00:28:17.648 SYMLINK libspdk_accel_ioat.so 00:28:17.648 CC module/accel/dsa/accel_dsa.o 00:28:17.648 CC module/accel/dsa/accel_dsa_rpc.o 00:28:17.648 SYMLINK libspdk_accel_error.so 00:28:17.648 CC module/accel/iaa/accel_iaa.o 00:28:17.648 CC module/accel/iaa/accel_iaa_rpc.o 00:28:17.648 CC module/sock/uring/uring.o 00:28:17.907 CC module/keyring/linux/keyring.o 00:28:17.907 LIB libspdk_fsdev_aio.a 00:28:17.907 CC module/bdev/delay/vbdev_delay.o 00:28:17.907 LIB libspdk_accel_iaa.a 00:28:17.907 SO libspdk_fsdev_aio.so.1.0 00:28:17.907 CC module/blobfs/bdev/blobfs_bdev.o 00:28:17.907 SO libspdk_accel_iaa.so.3.0 00:28:17.907 LIB libspdk_sock_posix.a 00:28:17.907 LIB libspdk_accel_dsa.a 00:28:17.907 SYMLINK libspdk_fsdev_aio.so 00:28:17.907 CC module/bdev/error/vbdev_error.o 00:28:17.907 CC module/bdev/error/vbdev_error_rpc.o 00:28:17.907 CC module/bdev/gpt/gpt.o 00:28:17.907 SO libspdk_accel_dsa.so.5.0 00:28:17.907 SO libspdk_sock_posix.so.6.0 00:28:17.907 SYMLINK libspdk_accel_iaa.so 00:28:17.907 CC module/keyring/linux/keyring_rpc.o 00:28:17.907 CC module/bdev/delay/vbdev_delay_rpc.o 00:28:18.165 SYMLINK libspdk_accel_dsa.so 00:28:18.165 SYMLINK libspdk_sock_posix.so 00:28:18.165 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:28:18.165 CC module/bdev/gpt/vbdev_gpt.o 00:28:18.165 LIB libspdk_keyring_linux.a 00:28:18.165 SO libspdk_keyring_linux.so.1.0 00:28:18.165 CC module/bdev/lvol/vbdev_lvol.o 00:28:18.165 SYMLINK libspdk_keyring_linux.so 00:28:18.165 LIB libspdk_bdev_delay.a 00:28:18.165 LIB libspdk_bdev_error.a 00:28:18.165 SO libspdk_bdev_delay.so.6.0 00:28:18.165 LIB libspdk_blobfs_bdev.a 00:28:18.165 SO libspdk_bdev_error.so.6.0 00:28:18.423 SO libspdk_blobfs_bdev.so.6.0 00:28:18.423 CC module/bdev/malloc/bdev_malloc.o 00:28:18.423 SYMLINK libspdk_bdev_delay.so 00:28:18.423 CC module/bdev/null/bdev_null.o 00:28:18.423 SYMLINK libspdk_bdev_error.so 00:28:18.423 SYMLINK libspdk_blobfs_bdev.so 00:28:18.423 CC module/bdev/malloc/bdev_malloc_rpc.o 00:28:18.423 CC module/bdev/nvme/bdev_nvme.o 00:28:18.423 LIB libspdk_bdev_gpt.a 00:28:18.423 CC module/bdev/passthru/vbdev_passthru.o 00:28:18.423 LIB libspdk_sock_uring.a 00:28:18.423 SO libspdk_bdev_gpt.so.6.0 00:28:18.423 SO libspdk_sock_uring.so.5.0 00:28:18.423 SYMLINK libspdk_bdev_gpt.so 00:28:18.423 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:28:18.423 CC module/bdev/raid/bdev_raid.o 00:28:18.423 SYMLINK libspdk_sock_uring.so 00:28:18.423 CC module/bdev/raid/bdev_raid_rpc.o 00:28:18.423 CC module/bdev/split/vbdev_split.o 00:28:18.682 CC module/bdev/null/bdev_null_rpc.o 00:28:18.682 LIB libspdk_bdev_malloc.a 00:28:18.682 LIB libspdk_bdev_passthru.a 00:28:18.682 SO libspdk_bdev_malloc.so.6.0 00:28:18.682 CC module/bdev/zone_block/vbdev_zone_block.o 00:28:18.682 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:28:18.682 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:28:18.682 SO libspdk_bdev_passthru.so.6.0 00:28:18.682 CC module/bdev/split/vbdev_split_rpc.o 00:28:18.682 SYMLINK libspdk_bdev_malloc.so 00:28:18.682 LIB libspdk_bdev_null.a 00:28:18.682 SYMLINK libspdk_bdev_passthru.so 00:28:18.682 CC module/bdev/raid/bdev_raid_sb.o 00:28:18.682 CC module/bdev/raid/raid0.o 00:28:18.941 SO libspdk_bdev_null.so.6.0 
00:28:18.941 CC module/bdev/uring/bdev_uring.o 00:28:18.941 SYMLINK libspdk_bdev_null.so 00:28:18.941 CC module/bdev/nvme/bdev_nvme_rpc.o 00:28:18.941 LIB libspdk_bdev_split.a 00:28:18.941 SO libspdk_bdev_split.so.6.0 00:28:18.941 CC module/bdev/aio/bdev_aio.o 00:28:18.941 SYMLINK libspdk_bdev_split.so 00:28:18.941 LIB libspdk_bdev_zone_block.a 00:28:18.941 CC module/bdev/aio/bdev_aio_rpc.o 00:28:18.941 LIB libspdk_bdev_lvol.a 00:28:18.941 CC module/bdev/uring/bdev_uring_rpc.o 00:28:19.199 SO libspdk_bdev_zone_block.so.6.0 00:28:19.199 SO libspdk_bdev_lvol.so.6.0 00:28:19.199 SYMLINK libspdk_bdev_zone_block.so 00:28:19.199 SYMLINK libspdk_bdev_lvol.so 00:28:19.199 CC module/bdev/nvme/nvme_rpc.o 00:28:19.199 CC module/bdev/ftl/bdev_ftl.o 00:28:19.199 CC module/bdev/nvme/bdev_mdns_client.o 00:28:19.199 LIB libspdk_bdev_uring.a 00:28:19.199 SO libspdk_bdev_uring.so.6.0 00:28:19.458 CC module/bdev/virtio/bdev_virtio_scsi.o 00:28:19.458 SYMLINK libspdk_bdev_uring.so 00:28:19.458 CC module/bdev/raid/raid1.o 00:28:19.458 CC module/bdev/iscsi/bdev_iscsi.o 00:28:19.458 LIB libspdk_bdev_aio.a 00:28:19.458 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:28:19.458 SO libspdk_bdev_aio.so.6.0 00:28:19.458 CC module/bdev/nvme/vbdev_opal.o 00:28:19.458 CC module/bdev/nvme/vbdev_opal_rpc.o 00:28:19.458 CC module/bdev/raid/concat.o 00:28:19.458 CC module/bdev/ftl/bdev_ftl_rpc.o 00:28:19.458 SYMLINK libspdk_bdev_aio.so 00:28:19.458 CC module/bdev/virtio/bdev_virtio_blk.o 00:28:19.716 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:28:19.716 CC module/bdev/virtio/bdev_virtio_rpc.o 00:28:19.716 LIB libspdk_bdev_raid.a 00:28:19.716 LIB libspdk_bdev_ftl.a 00:28:19.716 LIB libspdk_bdev_iscsi.a 00:28:19.716 SO libspdk_bdev_raid.so.6.0 00:28:19.716 SO libspdk_bdev_ftl.so.6.0 00:28:19.716 SO libspdk_bdev_iscsi.so.6.0 00:28:19.716 SYMLINK libspdk_bdev_ftl.so 00:28:19.716 SYMLINK libspdk_bdev_raid.so 00:28:20.024 SYMLINK libspdk_bdev_iscsi.so 00:28:20.024 LIB libspdk_bdev_virtio.a 00:28:20.024 SO libspdk_bdev_virtio.so.6.0 00:28:20.024 SYMLINK libspdk_bdev_virtio.so 00:28:20.986 LIB libspdk_bdev_nvme.a 00:28:20.986 SO libspdk_bdev_nvme.so.7.1 00:28:20.986 SYMLINK libspdk_bdev_nvme.so 00:28:21.554 CC module/event/subsystems/fsdev/fsdev.o 00:28:21.554 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:28:21.554 CC module/event/subsystems/scheduler/scheduler.o 00:28:21.554 CC module/event/subsystems/iobuf/iobuf.o 00:28:21.554 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:28:21.554 CC module/event/subsystems/keyring/keyring.o 00:28:21.554 CC module/event/subsystems/sock/sock.o 00:28:21.554 CC module/event/subsystems/vmd/vmd.o 00:28:21.554 CC module/event/subsystems/vmd/vmd_rpc.o 00:28:21.554 LIB libspdk_event_fsdev.a 00:28:21.554 LIB libspdk_event_vhost_blk.a 00:28:21.554 LIB libspdk_event_sock.a 00:28:21.554 LIB libspdk_event_iobuf.a 00:28:21.813 LIB libspdk_event_scheduler.a 00:28:21.813 SO libspdk_event_vhost_blk.so.3.0 00:28:21.813 LIB libspdk_event_vmd.a 00:28:21.813 LIB libspdk_event_keyring.a 00:28:21.813 SO libspdk_event_fsdev.so.1.0 00:28:21.813 SO libspdk_event_sock.so.5.0 00:28:21.813 SO libspdk_event_iobuf.so.3.0 00:28:21.813 SO libspdk_event_scheduler.so.4.0 00:28:21.813 SO libspdk_event_keyring.so.1.0 00:28:21.813 SO libspdk_event_vmd.so.6.0 00:28:21.813 SYMLINK libspdk_event_vhost_blk.so 00:28:21.813 SYMLINK libspdk_event_sock.so 00:28:21.813 SYMLINK libspdk_event_fsdev.so 00:28:21.813 SYMLINK libspdk_event_scheduler.so 00:28:21.813 SYMLINK libspdk_event_keyring.so 00:28:21.813 SYMLINK libspdk_event_iobuf.so 
00:28:21.813 SYMLINK libspdk_event_vmd.so 00:28:22.071 CC module/event/subsystems/accel/accel.o 00:28:22.346 LIB libspdk_event_accel.a 00:28:22.346 SO libspdk_event_accel.so.6.0 00:28:22.346 SYMLINK libspdk_event_accel.so 00:28:22.915 CC module/event/subsystems/bdev/bdev.o 00:28:22.915 LIB libspdk_event_bdev.a 00:28:22.915 SO libspdk_event_bdev.so.6.0 00:28:23.173 SYMLINK libspdk_event_bdev.so 00:28:23.431 CC module/event/subsystems/ublk/ublk.o 00:28:23.431 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:28:23.431 CC module/event/subsystems/scsi/scsi.o 00:28:23.431 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:28:23.431 CC module/event/subsystems/nbd/nbd.o 00:28:23.690 LIB libspdk_event_ublk.a 00:28:23.690 LIB libspdk_event_nbd.a 00:28:23.690 LIB libspdk_event_scsi.a 00:28:23.690 SO libspdk_event_ublk.so.3.0 00:28:23.690 SO libspdk_event_nbd.so.6.0 00:28:23.690 SO libspdk_event_scsi.so.6.0 00:28:23.690 SYMLINK libspdk_event_ublk.so 00:28:23.690 LIB libspdk_event_nvmf.a 00:28:23.690 SYMLINK libspdk_event_nbd.so 00:28:23.690 SYMLINK libspdk_event_scsi.so 00:28:23.690 SO libspdk_event_nvmf.so.6.0 00:28:23.690 SYMLINK libspdk_event_nvmf.so 00:28:24.257 CC module/event/subsystems/iscsi/iscsi.o 00:28:24.257 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:28:24.257 LIB libspdk_event_iscsi.a 00:28:24.257 LIB libspdk_event_vhost_scsi.a 00:28:24.257 SO libspdk_event_iscsi.so.6.0 00:28:24.257 SO libspdk_event_vhost_scsi.so.3.0 00:28:24.257 SYMLINK libspdk_event_iscsi.so 00:28:24.516 SYMLINK libspdk_event_vhost_scsi.so 00:28:24.516 SO libspdk.so.6.0 00:28:24.516 SYMLINK libspdk.so 00:28:24.774 CXX app/trace/trace.o 00:28:24.774 CC app/spdk_lspci/spdk_lspci.o 00:28:24.774 CC app/spdk_nvme_identify/identify.o 00:28:24.774 CC app/trace_record/trace_record.o 00:28:24.774 CC app/spdk_nvme_perf/perf.o 00:28:25.033 CC app/nvmf_tgt/nvmf_main.o 00:28:25.033 CC app/spdk_tgt/spdk_tgt.o 00:28:25.033 CC app/iscsi_tgt/iscsi_tgt.o 00:28:25.033 CC test/thread/poller_perf/poller_perf.o 00:28:25.033 CC examples/util/zipf/zipf.o 00:28:25.033 LINK spdk_lspci 00:28:25.033 LINK nvmf_tgt 00:28:25.033 LINK spdk_trace_record 00:28:25.033 LINK poller_perf 00:28:25.292 LINK zipf 00:28:25.292 LINK iscsi_tgt 00:28:25.292 LINK spdk_tgt 00:28:25.292 CC app/spdk_nvme_discover/discovery_aer.o 00:28:25.292 LINK spdk_trace 00:28:25.292 CC app/spdk_top/spdk_top.o 00:28:25.551 TEST_HEADER include/spdk/accel.h 00:28:25.551 TEST_HEADER include/spdk/accel_module.h 00:28:25.551 CC examples/ioat/verify/verify.o 00:28:25.551 TEST_HEADER include/spdk/assert.h 00:28:25.551 CC examples/ioat/perf/perf.o 00:28:25.551 TEST_HEADER include/spdk/barrier.h 00:28:25.551 TEST_HEADER include/spdk/base64.h 00:28:25.551 TEST_HEADER include/spdk/bdev.h 00:28:25.551 TEST_HEADER include/spdk/bdev_module.h 00:28:25.551 TEST_HEADER include/spdk/bdev_zone.h 00:28:25.551 TEST_HEADER include/spdk/bit_array.h 00:28:25.551 TEST_HEADER include/spdk/bit_pool.h 00:28:25.551 TEST_HEADER include/spdk/blob_bdev.h 00:28:25.551 TEST_HEADER include/spdk/blobfs_bdev.h 00:28:25.551 TEST_HEADER include/spdk/blobfs.h 00:28:25.551 TEST_HEADER include/spdk/blob.h 00:28:25.551 TEST_HEADER include/spdk/conf.h 00:28:25.551 CC test/dma/test_dma/test_dma.o 00:28:25.551 TEST_HEADER include/spdk/config.h 00:28:25.551 TEST_HEADER include/spdk/cpuset.h 00:28:25.551 TEST_HEADER include/spdk/crc16.h 00:28:25.551 TEST_HEADER include/spdk/crc32.h 00:28:25.551 LINK spdk_nvme_discover 00:28:25.551 TEST_HEADER include/spdk/crc64.h 00:28:25.551 TEST_HEADER include/spdk/dif.h 00:28:25.551 
TEST_HEADER include/spdk/dma.h 00:28:25.551 TEST_HEADER include/spdk/endian.h 00:28:25.551 TEST_HEADER include/spdk/env_dpdk.h 00:28:25.551 TEST_HEADER include/spdk/env.h 00:28:25.551 TEST_HEADER include/spdk/event.h 00:28:25.551 TEST_HEADER include/spdk/fd_group.h 00:28:25.551 TEST_HEADER include/spdk/fd.h 00:28:25.551 TEST_HEADER include/spdk/file.h 00:28:25.551 TEST_HEADER include/spdk/fsdev.h 00:28:25.551 TEST_HEADER include/spdk/fsdev_module.h 00:28:25.551 TEST_HEADER include/spdk/ftl.h 00:28:25.552 TEST_HEADER include/spdk/fuse_dispatcher.h 00:28:25.552 TEST_HEADER include/spdk/gpt_spec.h 00:28:25.552 TEST_HEADER include/spdk/hexlify.h 00:28:25.552 TEST_HEADER include/spdk/histogram_data.h 00:28:25.552 TEST_HEADER include/spdk/idxd.h 00:28:25.552 TEST_HEADER include/spdk/idxd_spec.h 00:28:25.552 TEST_HEADER include/spdk/init.h 00:28:25.552 TEST_HEADER include/spdk/ioat.h 00:28:25.552 TEST_HEADER include/spdk/ioat_spec.h 00:28:25.552 TEST_HEADER include/spdk/iscsi_spec.h 00:28:25.552 CC test/app/bdev_svc/bdev_svc.o 00:28:25.552 TEST_HEADER include/spdk/json.h 00:28:25.552 TEST_HEADER include/spdk/jsonrpc.h 00:28:25.552 TEST_HEADER include/spdk/keyring.h 00:28:25.552 TEST_HEADER include/spdk/keyring_module.h 00:28:25.552 TEST_HEADER include/spdk/likely.h 00:28:25.552 TEST_HEADER include/spdk/log.h 00:28:25.552 TEST_HEADER include/spdk/lvol.h 00:28:25.552 TEST_HEADER include/spdk/md5.h 00:28:25.552 TEST_HEADER include/spdk/memory.h 00:28:25.552 TEST_HEADER include/spdk/mmio.h 00:28:25.552 TEST_HEADER include/spdk/nbd.h 00:28:25.552 TEST_HEADER include/spdk/net.h 00:28:25.552 TEST_HEADER include/spdk/notify.h 00:28:25.552 TEST_HEADER include/spdk/nvme.h 00:28:25.552 TEST_HEADER include/spdk/nvme_intel.h 00:28:25.552 TEST_HEADER include/spdk/nvme_ocssd.h 00:28:25.552 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:28:25.552 TEST_HEADER include/spdk/nvme_spec.h 00:28:25.552 TEST_HEADER include/spdk/nvme_zns.h 00:28:25.552 TEST_HEADER include/spdk/nvmf_cmd.h 00:28:25.552 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:28:25.552 TEST_HEADER include/spdk/nvmf.h 00:28:25.552 TEST_HEADER include/spdk/nvmf_spec.h 00:28:25.552 TEST_HEADER include/spdk/nvmf_transport.h 00:28:25.552 TEST_HEADER include/spdk/opal.h 00:28:25.552 TEST_HEADER include/spdk/opal_spec.h 00:28:25.552 TEST_HEADER include/spdk/pci_ids.h 00:28:25.552 TEST_HEADER include/spdk/pipe.h 00:28:25.552 TEST_HEADER include/spdk/queue.h 00:28:25.552 TEST_HEADER include/spdk/reduce.h 00:28:25.552 TEST_HEADER include/spdk/rpc.h 00:28:25.552 TEST_HEADER include/spdk/scheduler.h 00:28:25.552 TEST_HEADER include/spdk/scsi.h 00:28:25.552 LINK spdk_nvme_identify 00:28:25.552 TEST_HEADER include/spdk/scsi_spec.h 00:28:25.552 TEST_HEADER include/spdk/sock.h 00:28:25.811 TEST_HEADER include/spdk/stdinc.h 00:28:25.811 TEST_HEADER include/spdk/string.h 00:28:25.811 TEST_HEADER include/spdk/thread.h 00:28:25.811 TEST_HEADER include/spdk/trace.h 00:28:25.811 TEST_HEADER include/spdk/trace_parser.h 00:28:25.811 TEST_HEADER include/spdk/tree.h 00:28:25.811 TEST_HEADER include/spdk/ublk.h 00:28:25.811 TEST_HEADER include/spdk/util.h 00:28:25.811 TEST_HEADER include/spdk/uuid.h 00:28:25.811 LINK verify 00:28:25.811 TEST_HEADER include/spdk/version.h 00:28:25.811 TEST_HEADER include/spdk/vfio_user_pci.h 00:28:25.811 TEST_HEADER include/spdk/vfio_user_spec.h 00:28:25.811 TEST_HEADER include/spdk/vhost.h 00:28:25.811 LINK ioat_perf 00:28:25.811 TEST_HEADER include/spdk/vmd.h 00:28:25.811 TEST_HEADER include/spdk/xor.h 00:28:25.811 TEST_HEADER include/spdk/zipf.h 
00:28:25.811 CXX test/cpp_headers/accel.o 00:28:25.811 LINK spdk_nvme_perf 00:28:25.811 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:28:25.811 LINK bdev_svc 00:28:25.811 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:28:25.811 CXX test/cpp_headers/accel_module.o 00:28:26.070 CC test/app/histogram_perf/histogram_perf.o 00:28:26.070 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:28:26.070 CC examples/vmd/lsvmd/lsvmd.o 00:28:26.070 CXX test/cpp_headers/assert.o 00:28:26.070 LINK test_dma 00:28:26.070 CC app/spdk_dd/spdk_dd.o 00:28:26.070 CC test/app/jsoncat/jsoncat.o 00:28:26.070 LINK nvme_fuzz 00:28:26.070 LINK histogram_perf 00:28:26.070 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:28:26.328 CXX test/cpp_headers/barrier.o 00:28:26.328 LINK lsvmd 00:28:26.328 LINK jsoncat 00:28:26.328 LINK spdk_top 00:28:26.328 CXX test/cpp_headers/base64.o 00:28:26.328 CXX test/cpp_headers/bdev.o 00:28:26.328 CXX test/cpp_headers/bdev_module.o 00:28:26.328 CXX test/cpp_headers/bdev_zone.o 00:28:26.587 CXX test/cpp_headers/bit_array.o 00:28:26.588 LINK spdk_dd 00:28:26.588 CC examples/vmd/led/led.o 00:28:26.588 CXX test/cpp_headers/bit_pool.o 00:28:26.588 CC app/vhost/vhost.o 00:28:26.588 LINK vhost_fuzz 00:28:26.588 CC app/fio/nvme/fio_plugin.o 00:28:26.588 CC app/fio/bdev/fio_plugin.o 00:28:26.588 LINK led 00:28:26.588 CXX test/cpp_headers/blob_bdev.o 00:28:26.588 CXX test/cpp_headers/blobfs_bdev.o 00:28:26.847 LINK vhost 00:28:26.847 CC test/app/stub/stub.o 00:28:26.847 CC examples/idxd/perf/perf.o 00:28:26.847 CXX test/cpp_headers/blobfs.o 00:28:26.847 CC test/env/mem_callbacks/mem_callbacks.o 00:28:26.847 LINK stub 00:28:27.106 CXX test/cpp_headers/blob.o 00:28:27.106 CC examples/interrupt_tgt/interrupt_tgt.o 00:28:27.106 CC test/event/event_perf/event_perf.o 00:28:27.106 LINK spdk_nvme 00:28:27.106 LINK spdk_bdev 00:28:27.106 LINK idxd_perf 00:28:27.106 CC examples/thread/thread/thread_ex.o 00:28:27.106 CXX test/cpp_headers/conf.o 00:28:27.106 LINK event_perf 00:28:27.364 LINK interrupt_tgt 00:28:27.364 CC test/event/reactor/reactor.o 00:28:27.364 CC test/event/reactor_perf/reactor_perf.o 00:28:27.364 CC test/event/app_repeat/app_repeat.o 00:28:27.364 CXX test/cpp_headers/config.o 00:28:27.364 CXX test/cpp_headers/cpuset.o 00:28:27.364 LINK reactor 00:28:27.364 LINK reactor_perf 00:28:27.364 LINK thread 00:28:27.364 CXX test/cpp_headers/crc16.o 00:28:27.364 LINK iscsi_fuzz 00:28:27.364 CC test/event/scheduler/scheduler.o 00:28:27.364 LINK mem_callbacks 00:28:27.623 LINK app_repeat 00:28:27.623 CC examples/sock/hello_world/hello_sock.o 00:28:27.623 CXX test/cpp_headers/crc32.o 00:28:27.623 CC test/env/vtophys/vtophys.o 00:28:27.623 CC test/rpc_client/rpc_client_test.o 00:28:27.623 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:28:27.623 CC test/env/memory/memory_ut.o 00:28:27.623 LINK scheduler 00:28:27.623 CC test/nvme/aer/aer.o 00:28:27.623 CXX test/cpp_headers/crc64.o 00:28:27.883 LINK hello_sock 00:28:27.883 LINK vtophys 00:28:27.883 CC test/accel/dif/dif.o 00:28:27.883 LINK rpc_client_test 00:28:27.883 CC test/blobfs/mkfs/mkfs.o 00:28:27.883 CXX test/cpp_headers/dif.o 00:28:27.883 CXX test/cpp_headers/dma.o 00:28:27.883 LINK env_dpdk_post_init 00:28:27.883 CXX test/cpp_headers/endian.o 00:28:27.883 LINK aer 00:28:28.143 CC examples/accel/perf/accel_perf.o 00:28:28.143 LINK mkfs 00:28:28.143 CXX test/cpp_headers/env_dpdk.o 00:28:28.143 CC test/env/pci/pci_ut.o 00:28:28.143 CXX test/cpp_headers/env.o 00:28:28.143 CC examples/nvme/hello_world/hello_world.o 00:28:28.143 CC test/lvol/esnap/esnap.o 
00:28:28.143 CC examples/blob/hello_world/hello_blob.o 00:28:28.143 CC test/nvme/reset/reset.o 00:28:28.402 CXX test/cpp_headers/event.o 00:28:28.402 LINK dif 00:28:28.402 CXX test/cpp_headers/fd_group.o 00:28:28.402 LINK hello_world 00:28:28.402 LINK hello_blob 00:28:28.402 LINK reset 00:28:28.402 LINK pci_ut 00:28:28.402 LINK accel_perf 00:28:28.662 CC examples/blob/cli/blobcli.o 00:28:28.662 CXX test/cpp_headers/fd.o 00:28:28.662 CC examples/nvme/reconnect/reconnect.o 00:28:28.662 CC test/nvme/sgl/sgl.o 00:28:28.662 CC examples/nvme/nvme_manage/nvme_manage.o 00:28:28.662 CXX test/cpp_headers/file.o 00:28:28.662 CC examples/nvme/hotplug/hotplug.o 00:28:28.662 LINK memory_ut 00:28:28.662 CC examples/nvme/arbitration/arbitration.o 00:28:28.921 CC examples/nvme/cmb_copy/cmb_copy.o 00:28:28.921 CXX test/cpp_headers/fsdev.o 00:28:28.921 CXX test/cpp_headers/fsdev_module.o 00:28:28.921 LINK hotplug 00:28:28.921 LINK blobcli 00:28:28.921 LINK sgl 00:28:29.180 LINK cmb_copy 00:28:29.180 LINK reconnect 00:28:29.180 LINK arbitration 00:28:29.180 CXX test/cpp_headers/ftl.o 00:28:29.180 LINK nvme_manage 00:28:29.180 CC test/bdev/bdevio/bdevio.o 00:28:29.180 CC examples/nvme/abort/abort.o 00:28:29.180 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:28:29.180 CC test/nvme/e2edp/nvme_dp.o 00:28:29.439 CXX test/cpp_headers/fuse_dispatcher.o 00:28:29.439 CC test/nvme/overhead/overhead.o 00:28:29.439 CC test/nvme/err_injection/err_injection.o 00:28:29.439 CC examples/fsdev/hello_world/hello_fsdev.o 00:28:29.439 LINK pmr_persistence 00:28:29.439 CC test/nvme/startup/startup.o 00:28:29.439 CXX test/cpp_headers/gpt_spec.o 00:28:29.439 LINK nvme_dp 00:28:29.439 LINK err_injection 00:28:29.697 LINK overhead 00:28:29.697 LINK abort 00:28:29.697 CXX test/cpp_headers/hexlify.o 00:28:29.697 LINK bdevio 00:28:29.697 LINK startup 00:28:29.697 CC test/nvme/reserve/reserve.o 00:28:29.697 LINK hello_fsdev 00:28:29.697 CC test/nvme/simple_copy/simple_copy.o 00:28:29.697 CXX test/cpp_headers/histogram_data.o 00:28:29.956 CC test/nvme/connect_stress/connect_stress.o 00:28:29.956 CXX test/cpp_headers/idxd.o 00:28:29.956 CC test/nvme/boot_partition/boot_partition.o 00:28:29.956 LINK reserve 00:28:29.956 CC test/nvme/compliance/nvme_compliance.o 00:28:29.956 LINK simple_copy 00:28:29.956 LINK connect_stress 00:28:29.956 CC examples/bdev/hello_world/hello_bdev.o 00:28:29.956 LINK boot_partition 00:28:29.956 CXX test/cpp_headers/idxd_spec.o 00:28:29.956 CC examples/bdev/bdevperf/bdevperf.o 00:28:29.956 CC test/nvme/fused_ordering/fused_ordering.o 00:28:30.215 CC test/nvme/doorbell_aers/doorbell_aers.o 00:28:30.215 CXX test/cpp_headers/init.o 00:28:30.215 CXX test/cpp_headers/ioat.o 00:28:30.215 LINK nvme_compliance 00:28:30.215 LINK fused_ordering 00:28:30.215 LINK hello_bdev 00:28:30.215 CC test/nvme/fdp/fdp.o 00:28:30.215 CC test/nvme/cuse/cuse.o 00:28:30.475 CXX test/cpp_headers/ioat_spec.o 00:28:30.475 LINK doorbell_aers 00:28:30.475 CXX test/cpp_headers/iscsi_spec.o 00:28:30.475 CXX test/cpp_headers/json.o 00:28:30.475 CXX test/cpp_headers/jsonrpc.o 00:28:30.475 CXX test/cpp_headers/keyring.o 00:28:30.475 CXX test/cpp_headers/keyring_module.o 00:28:30.475 CXX test/cpp_headers/likely.o 00:28:30.475 CXX test/cpp_headers/log.o 00:28:30.475 CXX test/cpp_headers/lvol.o 00:28:30.475 LINK fdp 00:28:30.734 CXX test/cpp_headers/md5.o 00:28:30.734 CXX test/cpp_headers/memory.o 00:28:30.734 CXX test/cpp_headers/mmio.o 00:28:30.734 CXX test/cpp_headers/nbd.o 00:28:30.734 CXX test/cpp_headers/net.o 00:28:30.734 CXX 
test/cpp_headers/notify.o 00:28:30.734 CXX test/cpp_headers/nvme.o 00:28:30.734 CXX test/cpp_headers/nvme_intel.o 00:28:30.734 CXX test/cpp_headers/nvme_ocssd.o 00:28:30.734 LINK bdevperf 00:28:30.734 CXX test/cpp_headers/nvme_ocssd_spec.o 00:28:30.734 CXX test/cpp_headers/nvme_spec.o 00:28:31.084 CXX test/cpp_headers/nvme_zns.o 00:28:31.084 CXX test/cpp_headers/nvmf_cmd.o 00:28:31.084 CXX test/cpp_headers/nvmf_fc_spec.o 00:28:31.084 CXX test/cpp_headers/nvmf.o 00:28:31.084 CXX test/cpp_headers/nvmf_spec.o 00:28:31.084 CXX test/cpp_headers/nvmf_transport.o 00:28:31.084 CXX test/cpp_headers/opal.o 00:28:31.084 CXX test/cpp_headers/opal_spec.o 00:28:31.084 CXX test/cpp_headers/pci_ids.o 00:28:31.084 CXX test/cpp_headers/pipe.o 00:28:31.084 CXX test/cpp_headers/queue.o 00:28:31.084 CXX test/cpp_headers/reduce.o 00:28:31.084 CXX test/cpp_headers/rpc.o 00:28:31.084 CXX test/cpp_headers/scheduler.o 00:28:31.343 CXX test/cpp_headers/scsi.o 00:28:31.343 CXX test/cpp_headers/scsi_spec.o 00:28:31.343 CXX test/cpp_headers/sock.o 00:28:31.343 CC examples/nvmf/nvmf/nvmf.o 00:28:31.343 CXX test/cpp_headers/stdinc.o 00:28:31.343 CXX test/cpp_headers/string.o 00:28:31.343 CXX test/cpp_headers/thread.o 00:28:31.343 CXX test/cpp_headers/trace.o 00:28:31.343 CXX test/cpp_headers/trace_parser.o 00:28:31.343 CXX test/cpp_headers/tree.o 00:28:31.343 CXX test/cpp_headers/ublk.o 00:28:31.343 CXX test/cpp_headers/util.o 00:28:31.343 CXX test/cpp_headers/uuid.o 00:28:31.602 CXX test/cpp_headers/version.o 00:28:31.602 CXX test/cpp_headers/vfio_user_pci.o 00:28:31.602 CXX test/cpp_headers/vfio_user_spec.o 00:28:31.602 LINK cuse 00:28:31.602 CXX test/cpp_headers/vhost.o 00:28:31.602 CXX test/cpp_headers/vmd.o 00:28:31.602 CXX test/cpp_headers/xor.o 00:28:31.602 CXX test/cpp_headers/zipf.o 00:28:31.602 LINK nvmf 00:28:32.978 LINK esnap 00:28:33.548 00:28:33.549 real 1m26.224s 00:28:33.549 user 7m22.480s 00:28:33.549 sys 1m37.656s 00:28:33.549 13:49:30 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:28:33.549 13:49:30 make -- common/autotest_common.sh@10 -- $ set +x 00:28:33.549 ************************************ 00:28:33.549 END TEST make 00:28:33.549 ************************************ 00:28:33.549 13:49:30 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:28:33.549 13:49:30 -- pm/common@29 -- $ signal_monitor_resources TERM 00:28:33.549 13:49:30 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:28:33.549 13:49:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:33.549 13:49:30 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:28:33.549 13:49:30 -- pm/common@44 -- $ pid=5467 00:28:33.549 13:49:30 -- pm/common@50 -- $ kill -TERM 5467 00:28:33.549 13:49:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:33.549 13:49:30 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:28:33.549 13:49:30 -- pm/common@44 -- $ pid=5469 00:28:33.549 13:49:30 -- pm/common@50 -- $ kill -TERM 5469 00:28:33.549 13:49:30 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:28:33.549 13:49:30 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:28:33.549 13:49:30 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:33.549 13:49:30 -- common/autotest_common.sh@1693 -- # lcov --version 00:28:33.549 13:49:30 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:33.549 
13:49:30 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:33.549 13:49:30 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:33.549 13:49:30 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:33.549 13:49:30 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:33.549 13:49:30 -- scripts/common.sh@336 -- # IFS=.-: 00:28:33.549 13:49:30 -- scripts/common.sh@336 -- # read -ra ver1 00:28:33.549 13:49:30 -- scripts/common.sh@337 -- # IFS=.-: 00:28:33.549 13:49:30 -- scripts/common.sh@337 -- # read -ra ver2 00:28:33.549 13:49:30 -- scripts/common.sh@338 -- # local 'op=<' 00:28:33.549 13:49:30 -- scripts/common.sh@340 -- # ver1_l=2 00:28:33.549 13:49:30 -- scripts/common.sh@341 -- # ver2_l=1 00:28:33.549 13:49:30 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:33.549 13:49:30 -- scripts/common.sh@344 -- # case "$op" in 00:28:33.549 13:49:30 -- scripts/common.sh@345 -- # : 1 00:28:33.549 13:49:30 -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:33.549 13:49:30 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:33.549 13:49:30 -- scripts/common.sh@365 -- # decimal 1 00:28:33.549 13:49:30 -- scripts/common.sh@353 -- # local d=1 00:28:33.549 13:49:30 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:33.549 13:49:30 -- scripts/common.sh@355 -- # echo 1 00:28:33.549 13:49:30 -- scripts/common.sh@365 -- # ver1[v]=1 00:28:33.549 13:49:30 -- scripts/common.sh@366 -- # decimal 2 00:28:33.549 13:49:30 -- scripts/common.sh@353 -- # local d=2 00:28:33.549 13:49:30 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:33.549 13:49:30 -- scripts/common.sh@355 -- # echo 2 00:28:33.549 13:49:30 -- scripts/common.sh@366 -- # ver2[v]=2 00:28:33.549 13:49:30 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:33.549 13:49:30 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:33.549 13:49:30 -- scripts/common.sh@368 -- # return 0 00:28:33.549 13:49:30 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:33.549 13:49:30 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:33.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.549 --rc genhtml_branch_coverage=1 00:28:33.549 --rc genhtml_function_coverage=1 00:28:33.549 --rc genhtml_legend=1 00:28:33.549 --rc geninfo_all_blocks=1 00:28:33.549 --rc geninfo_unexecuted_blocks=1 00:28:33.549 00:28:33.549 ' 00:28:33.549 13:49:30 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:33.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.549 --rc genhtml_branch_coverage=1 00:28:33.549 --rc genhtml_function_coverage=1 00:28:33.549 --rc genhtml_legend=1 00:28:33.549 --rc geninfo_all_blocks=1 00:28:33.549 --rc geninfo_unexecuted_blocks=1 00:28:33.549 00:28:33.549 ' 00:28:33.549 13:49:30 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:33.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.549 --rc genhtml_branch_coverage=1 00:28:33.549 --rc genhtml_function_coverage=1 00:28:33.549 --rc genhtml_legend=1 00:28:33.549 --rc geninfo_all_blocks=1 00:28:33.549 --rc geninfo_unexecuted_blocks=1 00:28:33.549 00:28:33.549 ' 00:28:33.549 13:49:30 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:33.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.549 --rc genhtml_branch_coverage=1 00:28:33.549 --rc genhtml_function_coverage=1 00:28:33.549 --rc genhtml_legend=1 00:28:33.549 --rc geninfo_all_blocks=1 00:28:33.549 --rc 
geninfo_unexecuted_blocks=1 00:28:33.549 00:28:33.549 ' 00:28:33.549 13:49:30 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:33.549 13:49:30 -- nvmf/common.sh@7 -- # uname -s 00:28:33.549 13:49:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:33.549 13:49:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:33.549 13:49:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:33.549 13:49:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:33.549 13:49:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:33.549 13:49:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:33.549 13:49:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:33.549 13:49:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:33.549 13:49:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:33.549 13:49:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:33.549 13:49:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:28:33.549 13:49:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=105ec898-1662-46bd-85be-b241e399edb9 00:28:33.549 13:49:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:33.549 13:49:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:33.549 13:49:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:33.549 13:49:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:33.549 13:49:30 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:33.549 13:49:30 -- scripts/common.sh@15 -- # shopt -s extglob 00:28:33.549 13:49:30 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:33.549 13:49:30 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:33.549 13:49:30 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:33.549 13:49:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.549 13:49:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.549 13:49:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.549 13:49:30 -- paths/export.sh@5 -- # export PATH 00:28:33.549 13:49:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.549 13:49:30 -- nvmf/common.sh@51 -- # : 0 00:28:33.549 13:49:30 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:33.549 13:49:30 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:33.549 13:49:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:33.549 13:49:30 -- nvmf/common.sh@29 -- 
# NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:33.549 13:49:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:33.549 13:49:30 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:33.549 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:33.549 13:49:30 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:33.549 13:49:30 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:33.549 13:49:30 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:33.549 13:49:30 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:28:33.549 13:49:30 -- spdk/autotest.sh@32 -- # uname -s 00:28:33.549 13:49:30 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:28:33.549 13:49:30 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:28:33.810 13:49:30 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:28:33.810 13:49:30 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:28:33.810 13:49:30 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:28:33.810 13:49:30 -- spdk/autotest.sh@44 -- # modprobe nbd 00:28:33.810 13:49:30 -- spdk/autotest.sh@46 -- # type -P udevadm 00:28:33.810 13:49:30 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:28:33.810 13:49:30 -- spdk/autotest.sh@48 -- # udevadm_pid=54576 00:28:33.810 13:49:30 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:28:33.810 13:49:30 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:28:33.810 13:49:30 -- pm/common@17 -- # local monitor 00:28:33.810 13:49:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:28:33.810 13:49:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:28:33.810 13:49:30 -- pm/common@25 -- # sleep 1 00:28:33.810 13:49:30 -- pm/common@21 -- # date +%s 00:28:33.810 13:49:30 -- pm/common@21 -- # date +%s 00:28:33.810 13:49:30 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732110570 00:28:33.810 13:49:30 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732110570 00:28:33.810 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732110570_collect-vmstat.pm.log 00:28:33.810 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732110570_collect-cpu-load.pm.log 00:28:34.747 13:49:31 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:28:34.747 13:49:31 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:28:34.747 13:49:31 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:34.747 13:49:31 -- common/autotest_common.sh@10 -- # set +x 00:28:34.747 13:49:31 -- spdk/autotest.sh@59 -- # create_test_list 00:28:34.747 13:49:31 -- common/autotest_common.sh@752 -- # xtrace_disable 00:28:34.747 13:49:31 -- common/autotest_common.sh@10 -- # set +x 00:28:34.747 13:49:32 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:28:34.747 13:49:32 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:28:34.747 13:49:32 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:28:34.747 13:49:32 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:28:34.747 13:49:32 -- spdk/autotest.sh@63 -- # cd 
/home/vagrant/spdk_repo/spdk 00:28:34.747 13:49:32 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:28:34.747 13:49:32 -- common/autotest_common.sh@1457 -- # uname 00:28:34.747 13:49:32 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:28:34.747 13:49:32 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:28:34.747 13:49:32 -- common/autotest_common.sh@1477 -- # uname 00:28:34.747 13:49:32 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:28:34.747 13:49:32 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:28:34.747 13:49:32 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:28:35.011 lcov: LCOV version 1.15 00:28:35.011 13:49:32 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:28:49.911 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:28:49.911 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:29:08.009 13:50:03 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:29:08.009 13:50:03 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:08.009 13:50:03 -- common/autotest_common.sh@10 -- # set +x 00:29:08.009 13:50:03 -- spdk/autotest.sh@78 -- # rm -f 00:29:08.009 13:50:03 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:08.009 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:08.009 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:29:08.009 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:29:08.009 13:50:04 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:29:08.009 13:50:04 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:29:08.009 13:50:04 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:29:08.009 13:50:04 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:29:08.009 13:50:04 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:29:08.009 13:50:04 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:29:08.010 13:50:04 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:29:08.010 13:50:04 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:08.010 13:50:04 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:08.010 13:50:04 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:29:08.010 13:50:04 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:29:08.010 13:50:04 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:29:08.010 13:50:04 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:29:08.010 13:50:04 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:08.010 13:50:04 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:29:08.010 13:50:04 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:29:08.010 13:50:04 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:29:08.010 13:50:04 -- 
common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:29:08.010 13:50:04 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:08.010 13:50:04 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:29:08.010 13:50:04 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:29:08.010 13:50:04 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:29:08.010 13:50:04 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:29:08.010 13:50:04 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:08.010 13:50:04 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:29:08.010 13:50:04 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:29:08.010 13:50:04 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:29:08.010 13:50:04 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:29:08.010 13:50:04 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:29:08.010 13:50:04 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:29:08.010 No valid GPT data, bailing 00:29:08.010 13:50:04 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:08.010 13:50:04 -- scripts/common.sh@394 -- # pt= 00:29:08.010 13:50:04 -- scripts/common.sh@395 -- # return 1 00:29:08.010 13:50:04 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:29:08.010 1+0 records in 00:29:08.010 1+0 records out 00:29:08.010 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00613774 s, 171 MB/s 00:29:08.010 13:50:04 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:29:08.010 13:50:04 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:29:08.010 13:50:04 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:29:08.010 13:50:04 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:29:08.010 13:50:04 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:29:08.010 No valid GPT data, bailing 00:29:08.010 13:50:04 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:29:08.010 13:50:04 -- scripts/common.sh@394 -- # pt= 00:29:08.010 13:50:04 -- scripts/common.sh@395 -- # return 1 00:29:08.010 13:50:04 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:29:08.010 1+0 records in 00:29:08.010 1+0 records out 00:29:08.010 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00680095 s, 154 MB/s 00:29:08.010 13:50:04 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:29:08.010 13:50:04 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:29:08.010 13:50:04 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:29:08.010 13:50:04 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:29:08.010 13:50:04 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:29:08.010 No valid GPT data, bailing 00:29:08.010 13:50:04 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:29:08.010 13:50:04 -- scripts/common.sh@394 -- # pt= 00:29:08.010 13:50:04 -- scripts/common.sh@395 -- # return 1 00:29:08.010 13:50:04 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:29:08.010 1+0 records in 00:29:08.010 1+0 records out 00:29:08.010 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00617457 s, 170 MB/s 00:29:08.010 13:50:04 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:29:08.010 13:50:04 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:29:08.010 13:50:04 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:29:08.010 
13:50:04 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:29:08.010 13:50:04 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:29:08.010 No valid GPT data, bailing 00:29:08.010 13:50:04 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:29:08.010 13:50:04 -- scripts/common.sh@394 -- # pt= 00:29:08.010 13:50:04 -- scripts/common.sh@395 -- # return 1 00:29:08.010 13:50:04 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:29:08.010 1+0 records in 00:29:08.010 1+0 records out 00:29:08.010 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00580231 s, 181 MB/s 00:29:08.010 13:50:04 -- spdk/autotest.sh@105 -- # sync 00:29:08.010 13:50:04 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:29:08.010 13:50:04 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:29:08.010 13:50:04 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:29:10.549 13:50:07 -- spdk/autotest.sh@111 -- # uname -s 00:29:10.549 13:50:07 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:29:10.549 13:50:07 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:29:10.549 13:50:07 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:29:11.177 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:11.177 Hugepages 00:29:11.177 node hugesize free / total 00:29:11.177 node0 1048576kB 0 / 0 00:29:11.177 node0 2048kB 0 / 0 00:29:11.177 00:29:11.177 Type BDF Vendor Device NUMA Driver Device Block devices 00:29:11.436 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:29:11.436 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:29:11.436 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:29:11.694 13:50:08 -- spdk/autotest.sh@117 -- # uname -s 00:29:11.694 13:50:08 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:29:11.694 13:50:08 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:29:11.694 13:50:08 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:12.263 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:12.523 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:29:12.523 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:29:12.523 13:50:09 -- common/autotest_common.sh@1517 -- # sleep 1 00:29:13.460 13:50:10 -- common/autotest_common.sh@1518 -- # bdfs=() 00:29:13.460 13:50:10 -- common/autotest_common.sh@1518 -- # local bdfs 00:29:13.460 13:50:10 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:29:13.460 13:50:10 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:29:13.460 13:50:10 -- common/autotest_common.sh@1498 -- # bdfs=() 00:29:13.460 13:50:10 -- common/autotest_common.sh@1498 -- # local bdfs 00:29:13.460 13:50:10 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:13.460 13:50:10 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:13.460 13:50:10 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:29:13.721 13:50:10 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:29:13.721 13:50:10 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:29:13.721 13:50:10 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 
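The get_nvme_bdfs helper traced just above boils down to a short shell routine. A minimal sketch, assuming the repo layout from this log ($rootdir pointing at /home/vagrant/spdk_repo/spdk) and that jq is installed:

# Enumerate local NVMe PCI addresses: ask gen_nvme.sh for a bdev config
# and pull the traddr fields out with jq, as the trace above does.
get_nvme_bdfs() {
    local -a bdfs
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || return 1
    printf '%s\n' "${bdfs[@]}"
}

On this VM the list is the two emulated controllers, 0000:00:10.0 and 0000:00:11.0, which the setup.sh reset that follows then operates on.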
00:29:13.981 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:14.240 Waiting for block devices as requested 00:29:14.240 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:29:14.240 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:29:14.500 13:50:11 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:29:14.500 13:50:11 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:29:14.500 13:50:11 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:29:14.500 13:50:11 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:29:14.500 13:50:11 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:29:14.500 13:50:11 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:29:14.500 13:50:11 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:29:14.500 13:50:11 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:29:14.500 13:50:11 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:29:14.500 13:50:11 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:29:14.500 13:50:11 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:29:14.500 13:50:11 -- common/autotest_common.sh@1531 -- # grep oacs 00:29:14.500 13:50:11 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:29:14.500 13:50:11 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:29:14.500 13:50:11 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:29:14.500 13:50:11 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:29:14.500 13:50:11 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:29:14.500 13:50:11 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:29:14.500 13:50:11 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:29:14.500 13:50:11 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:29:14.500 13:50:11 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:29:14.500 13:50:11 -- common/autotest_common.sh@1543 -- # continue 00:29:14.500 13:50:11 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:29:14.500 13:50:11 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:29:14.500 13:50:11 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:29:14.500 13:50:11 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:29:14.500 13:50:11 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:29:14.500 13:50:11 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:29:14.500 13:50:11 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:29:14.500 13:50:11 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:29:14.500 13:50:11 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:29:14.500 13:50:11 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:29:14.500 13:50:11 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:29:14.500 13:50:11 -- common/autotest_common.sh@1531 -- # grep oacs 00:29:14.500 13:50:11 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:29:14.500 13:50:11 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:29:14.500 13:50:11 -- 
common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:29:14.500 13:50:11 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:29:14.500 13:50:11 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:29:14.500 13:50:11 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:29:14.500 13:50:11 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:29:14.500 13:50:11 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:29:14.500 13:50:11 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:29:14.500 13:50:11 -- common/autotest_common.sh@1543 -- # continue 00:29:14.500 13:50:11 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:29:14.500 13:50:11 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:14.500 13:50:11 -- common/autotest_common.sh@10 -- # set +x 00:29:14.500 13:50:11 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:29:14.500 13:50:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:14.500 13:50:11 -- common/autotest_common.sh@10 -- # set +x 00:29:14.500 13:50:11 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:15.437 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:15.437 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:29:15.437 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:29:15.437 13:50:12 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:29:15.437 13:50:12 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:15.437 13:50:12 -- common/autotest_common.sh@10 -- # set +x 00:29:15.437 13:50:12 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:29:15.437 13:50:12 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:29:15.437 13:50:12 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:29:15.437 13:50:12 -- common/autotest_common.sh@1563 -- # bdfs=() 00:29:15.437 13:50:12 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:29:15.437 13:50:12 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:29:15.437 13:50:12 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:29:15.437 13:50:12 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:29:15.437 13:50:12 -- common/autotest_common.sh@1498 -- # bdfs=() 00:29:15.437 13:50:12 -- common/autotest_common.sh@1498 -- # local bdfs 00:29:15.437 13:50:12 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:15.437 13:50:12 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:15.437 13:50:12 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:29:15.697 13:50:12 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:29:15.697 13:50:12 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:29:15.697 13:50:12 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:29:15.697 13:50:12 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:29:15.697 13:50:12 -- common/autotest_common.sh@1566 -- # device=0x0010 00:29:15.697 13:50:12 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:29:15.697 13:50:12 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:29:15.697 13:50:12 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:29:15.697 13:50:12 -- common/autotest_common.sh@1566 -- # device=0x0010 00:29:15.697 13:50:12 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
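The per-controller check traced above for 0000:00:10.0 and 0000:00:11.0 reduces to: resolve the PCI address to its /dev/nvmeX node through sysfs, then parse OACS and the unallocated capacity out of nvme id-ctrl. A condensed sketch using the same readlink/grep/cut plumbing as the trace, with error handling omitted:

# Map a PCI BDF to its controller node and report whether it supports
# namespace management (OACS bit 3) and has unallocated capacity left.
check_ctrlr() {
    local bdf=$1 path ctrlr oacs unvmcap
    path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
    ctrlr=/dev/$(basename "$path")
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
    unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
    echo "$ctrlr oacs=$oacs ns_manage=$(( oacs & 0x8 )) unvmcap=$unvmcap"
}
check_ctrlr 0000:00:10.0   # on this VM: oacs=0x12a, ns_manage=8, unvmcap=0

An unvmcap of 0, as seen for both controllers here, means there is nothing to revert, so the loop simply continues.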
00:29:15.697 13:50:12 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:29:15.697 13:50:12 -- common/autotest_common.sh@1572 -- # return 0 00:29:15.697 13:50:12 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:29:15.697 13:50:12 -- common/autotest_common.sh@1580 -- # return 0 00:29:15.697 13:50:12 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:29:15.697 13:50:12 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:29:15.698 13:50:12 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:29:15.698 13:50:12 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:29:15.698 13:50:12 -- spdk/autotest.sh@149 -- # timing_enter lib 00:29:15.698 13:50:12 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:15.698 13:50:12 -- common/autotest_common.sh@10 -- # set +x 00:29:15.698 13:50:12 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:29:15.698 13:50:12 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:29:15.698 13:50:12 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:29:15.698 13:50:12 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:29:15.698 13:50:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:15.698 13:50:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:15.698 13:50:12 -- common/autotest_common.sh@10 -- # set +x 00:29:15.698 ************************************ 00:29:15.698 START TEST env 00:29:15.698 ************************************ 00:29:15.698 13:50:12 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:29:15.698 * Looking for test storage... 00:29:15.698 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:29:15.698 13:50:12 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:15.698 13:50:12 env -- common/autotest_common.sh@1693 -- # lcov --version 00:29:15.698 13:50:12 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:15.957 13:50:13 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:15.957 13:50:13 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:15.957 13:50:13 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:15.957 13:50:13 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:15.957 13:50:13 env -- scripts/common.sh@336 -- # IFS=.-: 00:29:15.957 13:50:13 env -- scripts/common.sh@336 -- # read -ra ver1 00:29:15.957 13:50:13 env -- scripts/common.sh@337 -- # IFS=.-: 00:29:15.957 13:50:13 env -- scripts/common.sh@337 -- # read -ra ver2 00:29:15.957 13:50:13 env -- scripts/common.sh@338 -- # local 'op=<' 00:29:15.957 13:50:13 env -- scripts/common.sh@340 -- # ver1_l=2 00:29:15.957 13:50:13 env -- scripts/common.sh@341 -- # ver2_l=1 00:29:15.957 13:50:13 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:15.957 13:50:13 env -- scripts/common.sh@344 -- # case "$op" in 00:29:15.958 13:50:13 env -- scripts/common.sh@345 -- # : 1 00:29:15.958 13:50:13 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:15.958 13:50:13 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:15.958 13:50:13 env -- scripts/common.sh@365 -- # decimal 1 00:29:15.958 13:50:13 env -- scripts/common.sh@353 -- # local d=1 00:29:15.958 13:50:13 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:15.958 13:50:13 env -- scripts/common.sh@355 -- # echo 1 00:29:15.958 13:50:13 env -- scripts/common.sh@365 -- # ver1[v]=1 00:29:15.958 13:50:13 env -- scripts/common.sh@366 -- # decimal 2 00:29:15.958 13:50:13 env -- scripts/common.sh@353 -- # local d=2 00:29:15.958 13:50:13 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:15.958 13:50:13 env -- scripts/common.sh@355 -- # echo 2 00:29:15.958 13:50:13 env -- scripts/common.sh@366 -- # ver2[v]=2 00:29:15.958 13:50:13 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:15.958 13:50:13 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:15.958 13:50:13 env -- scripts/common.sh@368 -- # return 0 00:29:15.958 13:50:13 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:15.958 13:50:13 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:15.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.958 --rc genhtml_branch_coverage=1 00:29:15.958 --rc genhtml_function_coverage=1 00:29:15.958 --rc genhtml_legend=1 00:29:15.958 --rc geninfo_all_blocks=1 00:29:15.958 --rc geninfo_unexecuted_blocks=1 00:29:15.958 00:29:15.958 ' 00:29:15.958 13:50:13 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:15.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.958 --rc genhtml_branch_coverage=1 00:29:15.958 --rc genhtml_function_coverage=1 00:29:15.958 --rc genhtml_legend=1 00:29:15.958 --rc geninfo_all_blocks=1 00:29:15.958 --rc geninfo_unexecuted_blocks=1 00:29:15.958 00:29:15.958 ' 00:29:15.958 13:50:13 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:15.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.958 --rc genhtml_branch_coverage=1 00:29:15.958 --rc genhtml_function_coverage=1 00:29:15.958 --rc genhtml_legend=1 00:29:15.958 --rc geninfo_all_blocks=1 00:29:15.958 --rc geninfo_unexecuted_blocks=1 00:29:15.958 00:29:15.958 ' 00:29:15.958 13:50:13 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:15.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.958 --rc genhtml_branch_coverage=1 00:29:15.958 --rc genhtml_function_coverage=1 00:29:15.958 --rc genhtml_legend=1 00:29:15.958 --rc geninfo_all_blocks=1 00:29:15.958 --rc geninfo_unexecuted_blocks=1 00:29:15.958 00:29:15.958 ' 00:29:15.958 13:50:13 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:29:15.958 13:50:13 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:15.958 13:50:13 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:15.958 13:50:13 env -- common/autotest_common.sh@10 -- # set +x 00:29:15.958 ************************************ 00:29:15.958 START TEST env_memory 00:29:15.958 ************************************ 00:29:15.958 13:50:13 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:29:15.958 00:29:15.958 00:29:15.958 CUnit - A unit testing framework for C - Version 2.1-3 00:29:15.958 http://cunit.sourceforge.net/ 00:29:15.958 00:29:15.958 00:29:15.958 Suite: memory 00:29:15.958 Test: alloc and free memory map ...[2024-11-20 13:50:13.108401] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:29:15.958 passed 00:29:15.958 Test: mem map translation ...[2024-11-20 13:50:13.129791] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:29:15.958 [2024-11-20 13:50:13.129815] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:29:15.958 [2024-11-20 13:50:13.129849] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:29:15.958 [2024-11-20 13:50:13.129854] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:29:15.958 passed 00:29:15.958 Test: mem map registration ...[2024-11-20 13:50:13.172814] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:29:15.958 [2024-11-20 13:50:13.172847] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:29:15.958 passed 00:29:15.958 Test: mem map adjacent registrations ...passed 00:29:15.958 00:29:15.958 Run Summary: Type Total Ran Passed Failed Inactive 00:29:15.958 suites 1 1 n/a 0 0 00:29:15.958 tests 4 4 4 0 0 00:29:15.958 asserts 152 152 152 0 n/a 00:29:15.958 00:29:15.958 Elapsed time = 0.156 seconds 00:29:15.958 00:29:15.958 real 0m0.177s 00:29:15.958 user 0m0.165s 00:29:15.958 sys 0m0.007s 00:29:15.958 13:50:13 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:15.958 13:50:13 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:29:15.958 ************************************ 00:29:15.958 END TEST env_memory 00:29:15.958 ************************************ 00:29:16.218 13:50:13 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:29:16.218 13:50:13 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:16.218 13:50:13 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:16.218 13:50:13 env -- common/autotest_common.sh@10 -- # set +x 00:29:16.218 ************************************ 00:29:16.218 START TEST env_vtophys 00:29:16.218 ************************************ 00:29:16.218 13:50:13 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:29:16.218 EAL: lib.eal log level changed from notice to debug 00:29:16.218 EAL: Detected lcore 0 as core 0 on socket 0 00:29:16.218 EAL: Detected lcore 1 as core 0 on socket 0 00:29:16.218 EAL: Detected lcore 2 as core 0 on socket 0 00:29:16.218 EAL: Detected lcore 3 as core 0 on socket 0 00:29:16.218 EAL: Detected lcore 4 as core 0 on socket 0 00:29:16.218 EAL: Detected lcore 5 as core 0 on socket 0 00:29:16.218 EAL: Detected lcore 6 as core 0 on socket 0 00:29:16.218 EAL: Detected lcore 7 as core 0 on socket 0 00:29:16.218 EAL: Detected lcore 8 as core 0 on socket 0 00:29:16.218 EAL: Detected lcore 9 as core 0 on socket 0 00:29:16.218 EAL: Maximum logical cores by configuration: 128 00:29:16.218 EAL: Detected CPU lcores: 10 00:29:16.218 EAL: Detected NUMA nodes: 1 00:29:16.218 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:29:16.218 EAL: Detected shared linkage of DPDK 00:29:16.218 EAL: No 
shared files mode enabled, IPC will be disabled 00:29:16.218 EAL: Selected IOVA mode 'PA' 00:29:16.218 EAL: Probing VFIO support... 00:29:16.218 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:29:16.218 EAL: VFIO modules not loaded, skipping VFIO support... 00:29:16.218 EAL: Ask a virtual area of 0x2e000 bytes 00:29:16.218 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:29:16.218 EAL: Setting up physically contiguous memory... 00:29:16.218 EAL: Setting maximum number of open files to 524288 00:29:16.218 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:29:16.218 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:29:16.218 EAL: Ask a virtual area of 0x61000 bytes 00:29:16.218 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:29:16.218 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:29:16.218 EAL: Ask a virtual area of 0x400000000 bytes 00:29:16.218 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:29:16.218 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:29:16.218 EAL: Ask a virtual area of 0x61000 bytes 00:29:16.218 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:29:16.218 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:29:16.218 EAL: Ask a virtual area of 0x400000000 bytes 00:29:16.218 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:29:16.218 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:29:16.218 EAL: Ask a virtual area of 0x61000 bytes 00:29:16.218 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:29:16.218 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:29:16.218 EAL: Ask a virtual area of 0x400000000 bytes 00:29:16.218 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:29:16.218 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:29:16.218 EAL: Ask a virtual area of 0x61000 bytes 00:29:16.218 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:29:16.218 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:29:16.218 EAL: Ask a virtual area of 0x400000000 bytes 00:29:16.218 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:29:16.218 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:29:16.218 EAL: Hugepages will be freed exactly as allocated. 00:29:16.218 EAL: No shared files mode enabled, IPC is disabled 00:29:16.218 EAL: No shared files mode enabled, IPC is disabled 00:29:16.218 EAL: TSC frequency is ~2290000 KHz 00:29:16.218 EAL: Main lcore 0 is ready (tid=7f47c7c8ba00;cpuset=[0]) 00:29:16.218 EAL: Trying to obtain current memory policy. 00:29:16.218 EAL: Setting policy MPOL_PREFERRED for socket 0 00:29:16.218 EAL: Restoring previous memory policy: 0 00:29:16.218 EAL: request: mp_malloc_sync 00:29:16.218 EAL: No shared files mode enabled, IPC is disabled 00:29:16.218 EAL: Heap on socket 0 was expanded by 2MB 00:29:16.218 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:29:16.218 EAL: No PCI address specified using 'addr=' in: bus=pci 00:29:16.218 EAL: Mem event callback 'spdk:(nil)' registered 00:29:16.218 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:29:16.218 00:29:16.218 00:29:16.218 CUnit - A unit testing framework for C - Version 2.1-3 00:29:16.218 http://cunit.sourceforge.net/ 00:29:16.218 00:29:16.218 00:29:16.218 Suite: components_suite 00:29:16.218 Test: vtophys_malloc_test ...passed 00:29:16.218 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:29:16.218 EAL: Setting policy MPOL_PREFERRED for socket 0 00:29:16.218 EAL: Restoring previous memory policy: 4 00:29:16.218 EAL: Calling mem event callback 'spdk:(nil)' 00:29:16.218 EAL: request: mp_malloc_sync 00:29:16.218 EAL: No shared files mode enabled, IPC is disabled 00:29:16.218 EAL: Heap on socket 0 was expanded by 4MB 00:29:16.218 EAL: Calling mem event callback 'spdk:(nil)' 00:29:16.218 EAL: request: mp_malloc_sync 00:29:16.218 EAL: No shared files mode enabled, IPC is disabled 00:29:16.218 EAL: Heap on socket 0 was shrunk by 4MB 00:29:16.218 EAL: Trying to obtain current memory policy. 00:29:16.218 EAL: Setting policy MPOL_PREFERRED for socket 0 00:29:16.218 EAL: Restoring previous memory policy: 4 00:29:16.218 EAL: Calling mem event callback 'spdk:(nil)' 00:29:16.218 EAL: request: mp_malloc_sync 00:29:16.218 EAL: No shared files mode enabled, IPC is disabled 00:29:16.218 EAL: Heap on socket 0 was expanded by 6MB 00:29:16.218 EAL: Calling mem event callback 'spdk:(nil)' 00:29:16.218 EAL: request: mp_malloc_sync 00:29:16.219 EAL: No shared files mode enabled, IPC is disabled 00:29:16.219 EAL: Heap on socket 0 was shrunk by 6MB 00:29:16.219 EAL: Trying to obtain current memory policy. 00:29:16.219 EAL: Setting policy MPOL_PREFERRED for socket 0 00:29:16.219 EAL: Restoring previous memory policy: 4 00:29:16.219 EAL: Calling mem event callback 'spdk:(nil)' 00:29:16.219 EAL: request: mp_malloc_sync 00:29:16.219 EAL: No shared files mode enabled, IPC is disabled 00:29:16.219 EAL: Heap on socket 0 was expanded by 10MB 00:29:16.219 EAL: Calling mem event callback 'spdk:(nil)' 00:29:16.219 EAL: request: mp_malloc_sync 00:29:16.219 EAL: No shared files mode enabled, IPC is disabled 00:29:16.219 EAL: Heap on socket 0 was shrunk by 10MB 00:29:16.219 EAL: Trying to obtain current memory policy. 00:29:16.219 EAL: Setting policy MPOL_PREFERRED for socket 0 00:29:16.219 EAL: Restoring previous memory policy: 4 00:29:16.219 EAL: Calling mem event callback 'spdk:(nil)' 00:29:16.219 EAL: request: mp_malloc_sync 00:29:16.219 EAL: No shared files mode enabled, IPC is disabled 00:29:16.219 EAL: Heap on socket 0 was expanded by 18MB 00:29:16.219 EAL: Calling mem event callback 'spdk:(nil)' 00:29:16.219 EAL: request: mp_malloc_sync 00:29:16.219 EAL: No shared files mode enabled, IPC is disabled 00:29:16.219 EAL: Heap on socket 0 was shrunk by 18MB 00:29:16.219 EAL: Trying to obtain current memory policy. 00:29:16.219 EAL: Setting policy MPOL_PREFERRED for socket 0 00:29:16.219 EAL: Restoring previous memory policy: 4 00:29:16.219 EAL: Calling mem event callback 'spdk:(nil)' 00:29:16.219 EAL: request: mp_malloc_sync 00:29:16.219 EAL: No shared files mode enabled, IPC is disabled 00:29:16.219 EAL: Heap on socket 0 was expanded by 34MB 00:29:16.219 EAL: Calling mem event callback 'spdk:(nil)' 00:29:16.219 EAL: request: mp_malloc_sync 00:29:16.219 EAL: No shared files mode enabled, IPC is disabled 00:29:16.219 EAL: Heap on socket 0 was shrunk by 34MB 00:29:16.219 EAL: Trying to obtain current memory policy. 
00:29:16.219 EAL: Setting policy MPOL_PREFERRED for socket 0 00:29:16.219 EAL: Restoring previous memory policy: 4 00:29:16.219 EAL: Calling mem event callback 'spdk:(nil)' 00:29:16.219 EAL: request: mp_malloc_sync 00:29:16.219 EAL: No shared files mode enabled, IPC is disabled 00:29:16.219 EAL: Heap on socket 0 was expanded by 66MB 00:29:16.219 EAL: Calling mem event callback 'spdk:(nil)' 00:29:16.219 EAL: request: mp_malloc_sync 00:29:16.219 EAL: No shared files mode enabled, IPC is disabled 00:29:16.219 EAL: Heap on socket 0 was shrunk by 66MB 00:29:16.219 EAL: Trying to obtain current memory policy. 00:29:16.219 EAL: Setting policy MPOL_PREFERRED for socket 0 00:29:16.481 EAL: Restoring previous memory policy: 4 00:29:16.481 EAL: Calling mem event callback 'spdk:(nil)' 00:29:16.481 EAL: request: mp_malloc_sync 00:29:16.481 EAL: No shared files mode enabled, IPC is disabled 00:29:16.481 EAL: Heap on socket 0 was expanded by 130MB 00:29:16.481 EAL: Calling mem event callback 'spdk:(nil)' 00:29:16.481 EAL: request: mp_malloc_sync 00:29:16.481 EAL: No shared files mode enabled, IPC is disabled 00:29:16.481 EAL: Heap on socket 0 was shrunk by 130MB 00:29:16.481 EAL: Trying to obtain current memory policy. 00:29:16.481 EAL: Setting policy MPOL_PREFERRED for socket 0 00:29:16.481 EAL: Restoring previous memory policy: 4 00:29:16.481 EAL: Calling mem event callback 'spdk:(nil)' 00:29:16.481 EAL: request: mp_malloc_sync 00:29:16.481 EAL: No shared files mode enabled, IPC is disabled 00:29:16.481 EAL: Heap on socket 0 was expanded by 258MB 00:29:16.481 EAL: Calling mem event callback 'spdk:(nil)' 00:29:16.481 EAL: request: mp_malloc_sync 00:29:16.481 EAL: No shared files mode enabled, IPC is disabled 00:29:16.481 EAL: Heap on socket 0 was shrunk by 258MB 00:29:16.481 EAL: Trying to obtain current memory policy. 00:29:16.481 EAL: Setting policy MPOL_PREFERRED for socket 0 00:29:16.743 EAL: Restoring previous memory policy: 4 00:29:16.743 EAL: Calling mem event callback 'spdk:(nil)' 00:29:16.743 EAL: request: mp_malloc_sync 00:29:16.743 EAL: No shared files mode enabled, IPC is disabled 00:29:16.743 EAL: Heap on socket 0 was expanded by 514MB 00:29:16.743 EAL: Calling mem event callback 'spdk:(nil)' 00:29:16.743 EAL: request: mp_malloc_sync 00:29:16.743 EAL: No shared files mode enabled, IPC is disabled 00:29:16.743 EAL: Heap on socket 0 was shrunk by 514MB 00:29:16.743 EAL: Trying to obtain current memory policy. 
00:29:16.743 EAL: Setting policy MPOL_PREFERRED for socket 0 00:29:17.003 EAL: Restoring previous memory policy: 4 00:29:17.003 EAL: Calling mem event callback 'spdk:(nil)' 00:29:17.003 EAL: request: mp_malloc_sync 00:29:17.003 EAL: No shared files mode enabled, IPC is disabled 00:29:17.003 EAL: Heap on socket 0 was expanded by 1026MB 00:29:17.003 EAL: Calling mem event callback 'spdk:(nil)' 00:29:17.264 passed 00:29:17.264 00:29:17.264 Run Summary: Type Total Ran Passed Failed Inactive 00:29:17.264 suites 1 1 n/a 0 0 00:29:17.264 tests 2 2 2 0 0 00:29:17.264 asserts 5505 5505 5505 0 n/a 00:29:17.264 00:29:17.264 Elapsed time = 0.985 seconds 00:29:17.264 EAL: request: mp_malloc_sync 00:29:17.264 EAL: No shared files mode enabled, IPC is disabled 00:29:17.264 EAL: Heap on socket 0 was shrunk by 1026MB 00:29:17.264 EAL: Calling mem event callback 'spdk:(nil)' 00:29:17.264 EAL: request: mp_malloc_sync 00:29:17.264 EAL: No shared files mode enabled, IPC is disabled 00:29:17.264 EAL: Heap on socket 0 was shrunk by 2MB 00:29:17.264 EAL: No shared files mode enabled, IPC is disabled 00:29:17.264 EAL: No shared files mode enabled, IPC is disabled 00:29:17.264 EAL: No shared files mode enabled, IPC is disabled 00:29:17.264 00:29:17.264 real 0m1.191s 00:29:17.264 user 0m0.650s 00:29:17.264 sys 0m0.416s 00:29:17.264 13:50:14 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:17.264 13:50:14 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:29:17.264 ************************************ 00:29:17.264 END TEST env_vtophys 00:29:17.264 ************************************ 00:29:17.264 13:50:14 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:29:17.264 13:50:14 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:17.264 13:50:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:17.264 13:50:14 env -- common/autotest_common.sh@10 -- # set +x 00:29:17.264 ************************************ 00:29:17.264 START TEST env_pci 00:29:17.264 ************************************ 00:29:17.264 13:50:14 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:29:17.264 00:29:17.264 00:29:17.264 CUnit - A unit testing framework for C - Version 2.1-3 00:29:17.264 http://cunit.sourceforge.net/ 00:29:17.264 00:29:17.264 00:29:17.264 Suite: pci 00:29:17.264 Test: pci_hook ...[2024-11-20 13:50:14.561541] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56814 has claimed it 00:29:17.264 passed 00:29:17.264 00:29:17.264 Run Summary: Type Total Ran Passed Failed Inactive 00:29:17.264 suites 1 1 n/a 0 0 00:29:17.264 tests 1 1 1 0 0 00:29:17.264 asserts 25 25 25 0 n/a 00:29:17.264 00:29:17.264 Elapsed time = 0.002 seconds 00:29:17.264 EAL: Cannot find device (10000:00:01.0) 00:29:17.264 EAL: Failed to attach device on primary process 00:29:17.264 00:29:17.264 real 0m0.021s 00:29:17.264 user 0m0.007s 00:29:17.264 sys 0m0.014s 00:29:17.264 13:50:14 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:17.264 13:50:14 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:29:17.264 ************************************ 00:29:17.264 END TEST env_pci 00:29:17.264 ************************************ 00:29:17.524 13:50:14 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:29:17.524 13:50:14 env -- env/env.sh@15 -- # uname 00:29:17.524 13:50:14 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:29:17.524 13:50:14 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:29:17.524 13:50:14 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:29:17.524 13:50:14 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:29:17.524 13:50:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:17.524 13:50:14 env -- common/autotest_common.sh@10 -- # set +x 00:29:17.524 ************************************ 00:29:17.524 START TEST env_dpdk_post_init 00:29:17.524 ************************************ 00:29:17.524 13:50:14 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:29:17.524 EAL: Detected CPU lcores: 10 00:29:17.524 EAL: Detected NUMA nodes: 1 00:29:17.524 EAL: Detected shared linkage of DPDK 00:29:17.524 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:29:17.524 EAL: Selected IOVA mode 'PA' 00:29:17.524 TELEMETRY: No legacy callbacks, legacy socket not created 00:29:17.524 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:29:17.524 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:29:17.524 Starting DPDK initialization... 00:29:17.524 Starting SPDK post initialization... 00:29:17.524 SPDK NVMe probe 00:29:17.524 Attaching to 0000:00:10.0 00:29:17.524 Attaching to 0000:00:11.0 00:29:17.524 Attached to 0000:00:10.0 00:29:17.524 Attached to 0000:00:11.0 00:29:17.524 Cleaning up... 00:29:17.524 00:29:17.524 real 0m0.199s 00:29:17.524 user 0m0.053s 00:29:17.524 sys 0m0.046s 00:29:17.524 13:50:14 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:17.524 13:50:14 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:29:17.524 ************************************ 00:29:17.524 END TEST env_dpdk_post_init 00:29:17.524 ************************************ 00:29:17.783 13:50:14 env -- env/env.sh@26 -- # uname 00:29:17.783 13:50:14 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:29:17.783 13:50:14 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:29:17.783 13:50:14 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:17.783 13:50:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:17.783 13:50:14 env -- common/autotest_common.sh@10 -- # set +x 00:29:17.783 ************************************ 00:29:17.783 START TEST env_mem_callbacks 00:29:17.783 ************************************ 00:29:17.783 13:50:14 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:29:17.783 EAL: Detected CPU lcores: 10 00:29:17.783 EAL: Detected NUMA nodes: 1 00:29:17.783 EAL: Detected shared linkage of DPDK 00:29:17.783 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:29:17.783 EAL: Selected IOVA mode 'PA' 00:29:17.783 00:29:17.783 00:29:17.783 CUnit - A unit testing framework for C - Version 2.1-3 00:29:17.783 http://cunit.sourceforge.net/ 00:29:17.783 00:29:17.783 00:29:17.783 Suite: memory 00:29:17.783 Test: test ... 
00:29:17.783 register 0x200000200000 2097152 00:29:17.783 TELEMETRY: No legacy callbacks, legacy socket not created 00:29:17.783 malloc 3145728 00:29:17.783 register 0x200000400000 4194304 00:29:17.783 buf 0x200000500000 len 3145728 PASSED 00:29:17.783 malloc 64 00:29:17.783 buf 0x2000004fff40 len 64 PASSED 00:29:17.783 malloc 4194304 00:29:17.783 register 0x200000800000 6291456 00:29:17.783 buf 0x200000a00000 len 4194304 PASSED 00:29:17.783 free 0x200000500000 3145728 00:29:17.783 free 0x2000004fff40 64 00:29:17.783 unregister 0x200000400000 4194304 PASSED 00:29:17.783 free 0x200000a00000 4194304 00:29:17.783 unregister 0x200000800000 6291456 PASSED 00:29:17.783 malloc 8388608 00:29:17.783 register 0x200000400000 10485760 00:29:17.783 buf 0x200000600000 len 8388608 PASSED 00:29:17.783 free 0x200000600000 8388608 00:29:17.783 unregister 0x200000400000 10485760 PASSED 00:29:17.783 passed 00:29:17.783 00:29:17.783 Run Summary: Type Total Ran Passed Failed Inactive 00:29:17.783 suites 1 1 n/a 0 0 00:29:17.783 tests 1 1 1 0 0 00:29:17.783 asserts 15 15 15 0 n/a 00:29:17.783 00:29:17.783 Elapsed time = 0.009 seconds 00:29:17.783 00:29:17.783 real 0m0.147s 00:29:17.783 user 0m0.016s 00:29:17.783 sys 0m0.030s 00:29:17.783 13:50:15 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:17.783 13:50:15 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:29:17.783 ************************************ 00:29:17.783 END TEST env_mem_callbacks 00:29:17.783 ************************************ 00:29:18.043 00:29:18.043 real 0m2.265s 00:29:18.043 user 0m1.101s 00:29:18.043 sys 0m0.848s 00:29:18.043 13:50:15 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:18.043 13:50:15 env -- common/autotest_common.sh@10 -- # set +x 00:29:18.043 ************************************ 00:29:18.043 END TEST env 00:29:18.043 ************************************ 00:29:18.043 13:50:15 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:29:18.043 13:50:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:18.043 13:50:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:18.043 13:50:15 -- common/autotest_common.sh@10 -- # set +x 00:29:18.043 ************************************ 00:29:18.043 START TEST rpc 00:29:18.043 ************************************ 00:29:18.043 13:50:15 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:29:18.043 * Looking for test storage... 
00:29:18.043 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:29:18.043 13:50:15 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:18.043 13:50:15 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:29:18.043 13:50:15 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:18.043 13:50:15 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:18.043 13:50:15 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:18.043 13:50:15 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:18.043 13:50:15 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:18.043 13:50:15 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:29:18.043 13:50:15 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:29:18.043 13:50:15 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:29:18.043 13:50:15 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:29:18.043 13:50:15 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:29:18.043 13:50:15 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:29:18.043 13:50:15 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:29:18.043 13:50:15 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:18.043 13:50:15 rpc -- scripts/common.sh@344 -- # case "$op" in 00:29:18.043 13:50:15 rpc -- scripts/common.sh@345 -- # : 1 00:29:18.043 13:50:15 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:18.043 13:50:15 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:18.043 13:50:15 rpc -- scripts/common.sh@365 -- # decimal 1 00:29:18.043 13:50:15 rpc -- scripts/common.sh@353 -- # local d=1 00:29:18.043 13:50:15 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:18.043 13:50:15 rpc -- scripts/common.sh@355 -- # echo 1 00:29:18.043 13:50:15 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:29:18.043 13:50:15 rpc -- scripts/common.sh@366 -- # decimal 2 00:29:18.302 13:50:15 rpc -- scripts/common.sh@353 -- # local d=2 00:29:18.302 13:50:15 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:18.302 13:50:15 rpc -- scripts/common.sh@355 -- # echo 2 00:29:18.302 13:50:15 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:29:18.302 13:50:15 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:18.302 13:50:15 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:18.302 13:50:15 rpc -- scripts/common.sh@368 -- # return 0 00:29:18.302 13:50:15 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:18.302 13:50:15 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:18.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.302 --rc genhtml_branch_coverage=1 00:29:18.302 --rc genhtml_function_coverage=1 00:29:18.302 --rc genhtml_legend=1 00:29:18.302 --rc geninfo_all_blocks=1 00:29:18.302 --rc geninfo_unexecuted_blocks=1 00:29:18.302 00:29:18.303 ' 00:29:18.303 13:50:15 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:18.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.303 --rc genhtml_branch_coverage=1 00:29:18.303 --rc genhtml_function_coverage=1 00:29:18.303 --rc genhtml_legend=1 00:29:18.303 --rc geninfo_all_blocks=1 00:29:18.303 --rc geninfo_unexecuted_blocks=1 00:29:18.303 00:29:18.303 ' 00:29:18.303 13:50:15 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:18.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.303 --rc genhtml_branch_coverage=1 00:29:18.303 --rc genhtml_function_coverage=1 00:29:18.303 --rc 
genhtml_legend=1 00:29:18.303 --rc geninfo_all_blocks=1 00:29:18.303 --rc geninfo_unexecuted_blocks=1 00:29:18.303 00:29:18.303 ' 00:29:18.303 13:50:15 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:18.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.303 --rc genhtml_branch_coverage=1 00:29:18.303 --rc genhtml_function_coverage=1 00:29:18.303 --rc genhtml_legend=1 00:29:18.303 --rc geninfo_all_blocks=1 00:29:18.303 --rc geninfo_unexecuted_blocks=1 00:29:18.303 00:29:18.303 ' 00:29:18.303 13:50:15 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56937 00:29:18.303 13:50:15 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:29:18.303 13:50:15 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56937 00:29:18.303 13:50:15 rpc -- common/autotest_common.sh@835 -- # '[' -z 56937 ']' 00:29:18.303 13:50:15 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:18.303 13:50:15 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:29:18.303 13:50:15 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:18.303 13:50:15 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:18.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:18.303 13:50:15 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:18.303 13:50:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:29:18.303 [2024-11-20 13:50:15.431354] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:29:18.303 [2024-11-20 13:50:15.431433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56937 ] 00:29:18.303 [2024-11-20 13:50:15.574295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:18.562 [2024-11-20 13:50:15.629227] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:29:18.562 [2024-11-20 13:50:15.629289] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56937' to capture a snapshot of events at runtime. 00:29:18.562 [2024-11-20 13:50:15.629295] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:18.562 [2024-11-20 13:50:15.629300] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:18.562 [2024-11-20 13:50:15.629304] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56937 for offline analysis/debug. 
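The target for rpc.sh is started with '-e bdev', so only the bdev tracepoint group is enabled at boot; the rpc_trace_cmd_test further down confirms this by reading back a tpoint_group_mask of 0x8 and a bdev tpoint_mask of 0xffffffffffffffff. A by-hand equivalent of that check would look roughly like the sketch below, assuming the repo layout used in this run and scripts/rpc.py as the JSON-RPC client (the test itself goes through its rpc_cmd wrapper; pid 56937 is the one from this log):

  # start the target with the bdev tracepoint group enabled, as rpc.sh does above
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
  # read the trace configuration over JSON-RPC; tpoint_group_mask should come back as 0x8
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py trace_get_info
  # snapshot the trace shared memory for offline analysis (the command quoted in the NOTICE above)
  spdk_trace -s spdk_tgt -p 56937

The same rpc_cmd wrapper drives bdev_malloc_create, bdev_passthru_create and bdev_get_bdevs in the rpc_integrity tests below, which is where the long JSON bdev dumps come from.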
00:29:18.562 [2024-11-20 13:50:15.629637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:18.562 [2024-11-20 13:50:15.689486] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:29:19.130 13:50:16 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:19.130 13:50:16 rpc -- common/autotest_common.sh@868 -- # return 0 00:29:19.130 13:50:16 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:29:19.130 13:50:16 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:29:19.130 13:50:16 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:29:19.130 13:50:16 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:29:19.130 13:50:16 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:19.130 13:50:16 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:19.130 13:50:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:29:19.130 ************************************ 00:29:19.130 START TEST rpc_integrity 00:29:19.130 ************************************ 00:29:19.130 13:50:16 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:29:19.130 13:50:16 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:29:19.130 13:50:16 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.130 13:50:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:29:19.130 13:50:16 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.130 13:50:16 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:29:19.130 13:50:16 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:29:19.130 13:50:16 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:29:19.130 13:50:16 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:29:19.130 13:50:16 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.130 13:50:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:29:19.130 13:50:16 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.130 13:50:16 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:29:19.130 13:50:16 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:29:19.130 13:50:16 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.130 13:50:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:29:19.130 13:50:16 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.130 13:50:16 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:29:19.130 { 00:29:19.130 "name": "Malloc0", 00:29:19.130 "aliases": [ 00:29:19.130 "470a7d1a-6f0d-416a-87ac-7572b9f6b044" 00:29:19.130 ], 00:29:19.130 "product_name": "Malloc disk", 00:29:19.130 "block_size": 512, 00:29:19.130 "num_blocks": 16384, 00:29:19.130 "uuid": "470a7d1a-6f0d-416a-87ac-7572b9f6b044", 00:29:19.130 "assigned_rate_limits": { 00:29:19.130 "rw_ios_per_sec": 0, 00:29:19.130 "rw_mbytes_per_sec": 0, 00:29:19.130 "r_mbytes_per_sec": 0, 00:29:19.130 "w_mbytes_per_sec": 0 00:29:19.130 }, 00:29:19.130 "claimed": false, 00:29:19.130 "zoned": false, 00:29:19.130 
"supported_io_types": { 00:29:19.130 "read": true, 00:29:19.130 "write": true, 00:29:19.130 "unmap": true, 00:29:19.130 "flush": true, 00:29:19.130 "reset": true, 00:29:19.130 "nvme_admin": false, 00:29:19.130 "nvme_io": false, 00:29:19.130 "nvme_io_md": false, 00:29:19.130 "write_zeroes": true, 00:29:19.130 "zcopy": true, 00:29:19.130 "get_zone_info": false, 00:29:19.130 "zone_management": false, 00:29:19.130 "zone_append": false, 00:29:19.130 "compare": false, 00:29:19.130 "compare_and_write": false, 00:29:19.130 "abort": true, 00:29:19.130 "seek_hole": false, 00:29:19.130 "seek_data": false, 00:29:19.130 "copy": true, 00:29:19.130 "nvme_iov_md": false 00:29:19.130 }, 00:29:19.130 "memory_domains": [ 00:29:19.130 { 00:29:19.130 "dma_device_id": "system", 00:29:19.130 "dma_device_type": 1 00:29:19.130 }, 00:29:19.130 { 00:29:19.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:19.130 "dma_device_type": 2 00:29:19.130 } 00:29:19.130 ], 00:29:19.130 "driver_specific": {} 00:29:19.130 } 00:29:19.130 ]' 00:29:19.130 13:50:16 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:29:19.389 13:50:16 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:29:19.389 13:50:16 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:29:19.389 13:50:16 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.389 13:50:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:29:19.389 [2024-11-20 13:50:16.504928] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:29:19.389 [2024-11-20 13:50:16.504974] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:19.389 [2024-11-20 13:50:16.504989] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f6d050 00:29:19.389 [2024-11-20 13:50:16.504995] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:19.389 [2024-11-20 13:50:16.506513] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:19.389 [2024-11-20 13:50:16.506549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:29:19.389 Passthru0 00:29:19.389 13:50:16 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.389 13:50:16 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:29:19.389 13:50:16 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.389 13:50:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:29:19.389 13:50:16 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.389 13:50:16 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:29:19.389 { 00:29:19.389 "name": "Malloc0", 00:29:19.389 "aliases": [ 00:29:19.389 "470a7d1a-6f0d-416a-87ac-7572b9f6b044" 00:29:19.389 ], 00:29:19.389 "product_name": "Malloc disk", 00:29:19.389 "block_size": 512, 00:29:19.389 "num_blocks": 16384, 00:29:19.389 "uuid": "470a7d1a-6f0d-416a-87ac-7572b9f6b044", 00:29:19.389 "assigned_rate_limits": { 00:29:19.389 "rw_ios_per_sec": 0, 00:29:19.389 "rw_mbytes_per_sec": 0, 00:29:19.389 "r_mbytes_per_sec": 0, 00:29:19.389 "w_mbytes_per_sec": 0 00:29:19.389 }, 00:29:19.389 "claimed": true, 00:29:19.389 "claim_type": "exclusive_write", 00:29:19.389 "zoned": false, 00:29:19.389 "supported_io_types": { 00:29:19.389 "read": true, 00:29:19.389 "write": true, 00:29:19.389 "unmap": true, 00:29:19.389 "flush": true, 00:29:19.389 "reset": true, 00:29:19.389 "nvme_admin": false, 
00:29:19.389 "nvme_io": false, 00:29:19.389 "nvme_io_md": false, 00:29:19.389 "write_zeroes": true, 00:29:19.389 "zcopy": true, 00:29:19.389 "get_zone_info": false, 00:29:19.389 "zone_management": false, 00:29:19.389 "zone_append": false, 00:29:19.389 "compare": false, 00:29:19.389 "compare_and_write": false, 00:29:19.389 "abort": true, 00:29:19.389 "seek_hole": false, 00:29:19.389 "seek_data": false, 00:29:19.389 "copy": true, 00:29:19.389 "nvme_iov_md": false 00:29:19.389 }, 00:29:19.389 "memory_domains": [ 00:29:19.389 { 00:29:19.389 "dma_device_id": "system", 00:29:19.389 "dma_device_type": 1 00:29:19.389 }, 00:29:19.389 { 00:29:19.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:19.389 "dma_device_type": 2 00:29:19.389 } 00:29:19.389 ], 00:29:19.389 "driver_specific": {} 00:29:19.389 }, 00:29:19.389 { 00:29:19.389 "name": "Passthru0", 00:29:19.389 "aliases": [ 00:29:19.390 "1d673472-5313-56ee-9312-0e9393ac0107" 00:29:19.390 ], 00:29:19.390 "product_name": "passthru", 00:29:19.390 "block_size": 512, 00:29:19.390 "num_blocks": 16384, 00:29:19.390 "uuid": "1d673472-5313-56ee-9312-0e9393ac0107", 00:29:19.390 "assigned_rate_limits": { 00:29:19.390 "rw_ios_per_sec": 0, 00:29:19.390 "rw_mbytes_per_sec": 0, 00:29:19.390 "r_mbytes_per_sec": 0, 00:29:19.390 "w_mbytes_per_sec": 0 00:29:19.390 }, 00:29:19.390 "claimed": false, 00:29:19.390 "zoned": false, 00:29:19.390 "supported_io_types": { 00:29:19.390 "read": true, 00:29:19.390 "write": true, 00:29:19.390 "unmap": true, 00:29:19.390 "flush": true, 00:29:19.390 "reset": true, 00:29:19.390 "nvme_admin": false, 00:29:19.390 "nvme_io": false, 00:29:19.390 "nvme_io_md": false, 00:29:19.390 "write_zeroes": true, 00:29:19.390 "zcopy": true, 00:29:19.390 "get_zone_info": false, 00:29:19.390 "zone_management": false, 00:29:19.390 "zone_append": false, 00:29:19.390 "compare": false, 00:29:19.390 "compare_and_write": false, 00:29:19.390 "abort": true, 00:29:19.390 "seek_hole": false, 00:29:19.390 "seek_data": false, 00:29:19.390 "copy": true, 00:29:19.390 "nvme_iov_md": false 00:29:19.390 }, 00:29:19.390 "memory_domains": [ 00:29:19.390 { 00:29:19.390 "dma_device_id": "system", 00:29:19.390 "dma_device_type": 1 00:29:19.390 }, 00:29:19.390 { 00:29:19.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:19.390 "dma_device_type": 2 00:29:19.390 } 00:29:19.390 ], 00:29:19.390 "driver_specific": { 00:29:19.390 "passthru": { 00:29:19.390 "name": "Passthru0", 00:29:19.390 "base_bdev_name": "Malloc0" 00:29:19.390 } 00:29:19.390 } 00:29:19.390 } 00:29:19.390 ]' 00:29:19.390 13:50:16 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:29:19.390 13:50:16 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:29:19.390 13:50:16 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:29:19.390 13:50:16 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.390 13:50:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:29:19.390 13:50:16 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.390 13:50:16 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:19.390 13:50:16 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.390 13:50:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:29:19.390 13:50:16 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.390 13:50:16 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:29:19.390 13:50:16 rpc.rpc_integrity -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.390 13:50:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:29:19.390 13:50:16 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.390 13:50:16 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:29:19.390 13:50:16 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:29:19.390 13:50:16 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:29:19.390 00:29:19.390 real 0m0.332s 00:29:19.390 user 0m0.200s 00:29:19.390 sys 0m0.062s 00:29:19.390 13:50:16 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:19.390 13:50:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:29:19.390 ************************************ 00:29:19.390 END TEST rpc_integrity 00:29:19.390 ************************************ 00:29:19.649 13:50:16 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:29:19.649 13:50:16 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:19.649 13:50:16 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:19.649 13:50:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:29:19.649 ************************************ 00:29:19.649 START TEST rpc_plugins 00:29:19.649 ************************************ 00:29:19.649 13:50:16 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:29:19.649 13:50:16 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:29:19.649 13:50:16 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.649 13:50:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:29:19.649 13:50:16 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.649 13:50:16 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:29:19.649 13:50:16 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:29:19.649 13:50:16 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.649 13:50:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:29:19.649 13:50:16 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.649 13:50:16 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:29:19.649 { 00:29:19.649 "name": "Malloc1", 00:29:19.649 "aliases": [ 00:29:19.649 "0040cf3b-202f-485d-b565-1f7a7efe8081" 00:29:19.649 ], 00:29:19.649 "product_name": "Malloc disk", 00:29:19.649 "block_size": 4096, 00:29:19.649 "num_blocks": 256, 00:29:19.649 "uuid": "0040cf3b-202f-485d-b565-1f7a7efe8081", 00:29:19.649 "assigned_rate_limits": { 00:29:19.649 "rw_ios_per_sec": 0, 00:29:19.649 "rw_mbytes_per_sec": 0, 00:29:19.649 "r_mbytes_per_sec": 0, 00:29:19.649 "w_mbytes_per_sec": 0 00:29:19.649 }, 00:29:19.649 "claimed": false, 00:29:19.649 "zoned": false, 00:29:19.649 "supported_io_types": { 00:29:19.649 "read": true, 00:29:19.649 "write": true, 00:29:19.649 "unmap": true, 00:29:19.649 "flush": true, 00:29:19.649 "reset": true, 00:29:19.649 "nvme_admin": false, 00:29:19.649 "nvme_io": false, 00:29:19.649 "nvme_io_md": false, 00:29:19.649 "write_zeroes": true, 00:29:19.649 "zcopy": true, 00:29:19.649 "get_zone_info": false, 00:29:19.649 "zone_management": false, 00:29:19.649 "zone_append": false, 00:29:19.649 "compare": false, 00:29:19.649 "compare_and_write": false, 00:29:19.649 "abort": true, 00:29:19.649 "seek_hole": false, 00:29:19.649 "seek_data": false, 00:29:19.649 "copy": true, 00:29:19.649 "nvme_iov_md": false 00:29:19.649 }, 00:29:19.649 "memory_domains": [ 00:29:19.649 { 
00:29:19.649 "dma_device_id": "system", 00:29:19.649 "dma_device_type": 1 00:29:19.649 }, 00:29:19.649 { 00:29:19.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:19.649 "dma_device_type": 2 00:29:19.649 } 00:29:19.649 ], 00:29:19.649 "driver_specific": {} 00:29:19.649 } 00:29:19.649 ]' 00:29:19.649 13:50:16 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:29:19.649 13:50:16 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:29:19.649 13:50:16 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:29:19.649 13:50:16 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.649 13:50:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:29:19.649 13:50:16 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.649 13:50:16 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:29:19.649 13:50:16 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.649 13:50:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:29:19.649 13:50:16 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.649 13:50:16 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:29:19.649 13:50:16 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:29:19.649 13:50:16 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:29:19.649 00:29:19.649 real 0m0.174s 00:29:19.649 user 0m0.105s 00:29:19.649 sys 0m0.025s 00:29:19.649 13:50:16 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:19.649 13:50:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:29:19.649 ************************************ 00:29:19.649 END TEST rpc_plugins 00:29:19.649 ************************************ 00:29:19.649 13:50:16 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:29:19.649 13:50:16 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:19.649 13:50:16 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:19.649 13:50:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:29:19.649 ************************************ 00:29:19.649 START TEST rpc_trace_cmd_test 00:29:19.649 ************************************ 00:29:19.649 13:50:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:29:19.649 13:50:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:29:19.908 13:50:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:29:19.908 13:50:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.908 13:50:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:29:19.908 13:50:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.908 13:50:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:29:19.908 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56937", 00:29:19.908 "tpoint_group_mask": "0x8", 00:29:19.908 "iscsi_conn": { 00:29:19.908 "mask": "0x2", 00:29:19.908 "tpoint_mask": "0x0" 00:29:19.908 }, 00:29:19.908 "scsi": { 00:29:19.908 "mask": "0x4", 00:29:19.908 "tpoint_mask": "0x0" 00:29:19.908 }, 00:29:19.908 "bdev": { 00:29:19.908 "mask": "0x8", 00:29:19.908 "tpoint_mask": "0xffffffffffffffff" 00:29:19.908 }, 00:29:19.908 "nvmf_rdma": { 00:29:19.908 "mask": "0x10", 00:29:19.908 "tpoint_mask": "0x0" 00:29:19.908 }, 00:29:19.908 "nvmf_tcp": { 00:29:19.908 "mask": "0x20", 00:29:19.908 "tpoint_mask": "0x0" 00:29:19.908 }, 00:29:19.908 "ftl": { 00:29:19.908 
"mask": "0x40", 00:29:19.908 "tpoint_mask": "0x0" 00:29:19.908 }, 00:29:19.908 "blobfs": { 00:29:19.908 "mask": "0x80", 00:29:19.908 "tpoint_mask": "0x0" 00:29:19.908 }, 00:29:19.908 "dsa": { 00:29:19.908 "mask": "0x200", 00:29:19.908 "tpoint_mask": "0x0" 00:29:19.908 }, 00:29:19.908 "thread": { 00:29:19.908 "mask": "0x400", 00:29:19.908 "tpoint_mask": "0x0" 00:29:19.908 }, 00:29:19.908 "nvme_pcie": { 00:29:19.908 "mask": "0x800", 00:29:19.908 "tpoint_mask": "0x0" 00:29:19.908 }, 00:29:19.908 "iaa": { 00:29:19.908 "mask": "0x1000", 00:29:19.908 "tpoint_mask": "0x0" 00:29:19.908 }, 00:29:19.908 "nvme_tcp": { 00:29:19.908 "mask": "0x2000", 00:29:19.908 "tpoint_mask": "0x0" 00:29:19.908 }, 00:29:19.908 "bdev_nvme": { 00:29:19.908 "mask": "0x4000", 00:29:19.908 "tpoint_mask": "0x0" 00:29:19.908 }, 00:29:19.908 "sock": { 00:29:19.908 "mask": "0x8000", 00:29:19.908 "tpoint_mask": "0x0" 00:29:19.908 }, 00:29:19.908 "blob": { 00:29:19.908 "mask": "0x10000", 00:29:19.908 "tpoint_mask": "0x0" 00:29:19.908 }, 00:29:19.908 "bdev_raid": { 00:29:19.908 "mask": "0x20000", 00:29:19.908 "tpoint_mask": "0x0" 00:29:19.908 }, 00:29:19.908 "scheduler": { 00:29:19.908 "mask": "0x40000", 00:29:19.908 "tpoint_mask": "0x0" 00:29:19.908 } 00:29:19.908 }' 00:29:19.908 13:50:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:29:19.908 13:50:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:29:19.908 13:50:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:29:19.908 13:50:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:29:19.908 13:50:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:29:19.908 13:50:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:29:19.908 13:50:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:29:19.908 13:50:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:29:19.908 13:50:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:29:19.908 13:50:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:29:19.908 00:29:19.908 real 0m0.247s 00:29:19.908 user 0m0.199s 00:29:19.908 sys 0m0.036s 00:29:19.908 13:50:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:19.908 13:50:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:29:19.908 ************************************ 00:29:19.908 END TEST rpc_trace_cmd_test 00:29:19.908 ************************************ 00:29:20.167 13:50:17 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:29:20.167 13:50:17 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:29:20.167 13:50:17 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:29:20.167 13:50:17 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:20.167 13:50:17 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:20.167 13:50:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:29:20.167 ************************************ 00:29:20.167 START TEST rpc_daemon_integrity 00:29:20.167 ************************************ 00:29:20.167 13:50:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:29:20.167 13:50:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:29:20.167 13:50:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.167 13:50:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:29:20.167 
13:50:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.167 13:50:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:29:20.167 13:50:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:29:20.167 13:50:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:29:20.167 13:50:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:29:20.167 13:50:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.167 13:50:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:29:20.167 13:50:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.167 13:50:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:29:20.167 13:50:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:29:20.167 13:50:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.167 13:50:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:29:20.167 13:50:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.167 13:50:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:29:20.167 { 00:29:20.167 "name": "Malloc2", 00:29:20.167 "aliases": [ 00:29:20.167 "8b70afa1-6160-4c34-81be-a0609a0ea775" 00:29:20.167 ], 00:29:20.167 "product_name": "Malloc disk", 00:29:20.167 "block_size": 512, 00:29:20.167 "num_blocks": 16384, 00:29:20.167 "uuid": "8b70afa1-6160-4c34-81be-a0609a0ea775", 00:29:20.167 "assigned_rate_limits": { 00:29:20.167 "rw_ios_per_sec": 0, 00:29:20.167 "rw_mbytes_per_sec": 0, 00:29:20.167 "r_mbytes_per_sec": 0, 00:29:20.167 "w_mbytes_per_sec": 0 00:29:20.167 }, 00:29:20.167 "claimed": false, 00:29:20.167 "zoned": false, 00:29:20.167 "supported_io_types": { 00:29:20.167 "read": true, 00:29:20.167 "write": true, 00:29:20.167 "unmap": true, 00:29:20.167 "flush": true, 00:29:20.167 "reset": true, 00:29:20.167 "nvme_admin": false, 00:29:20.167 "nvme_io": false, 00:29:20.167 "nvme_io_md": false, 00:29:20.167 "write_zeroes": true, 00:29:20.167 "zcopy": true, 00:29:20.167 "get_zone_info": false, 00:29:20.167 "zone_management": false, 00:29:20.167 "zone_append": false, 00:29:20.168 "compare": false, 00:29:20.168 "compare_and_write": false, 00:29:20.168 "abort": true, 00:29:20.168 "seek_hole": false, 00:29:20.168 "seek_data": false, 00:29:20.168 "copy": true, 00:29:20.168 "nvme_iov_md": false 00:29:20.168 }, 00:29:20.168 "memory_domains": [ 00:29:20.168 { 00:29:20.168 "dma_device_id": "system", 00:29:20.168 "dma_device_type": 1 00:29:20.168 }, 00:29:20.168 { 00:29:20.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:20.168 "dma_device_type": 2 00:29:20.168 } 00:29:20.168 ], 00:29:20.168 "driver_specific": {} 00:29:20.168 } 00:29:20.168 ]' 00:29:20.168 13:50:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:29:20.168 13:50:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:29:20.168 13:50:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:29:20.168 13:50:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.168 13:50:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:29:20.168 [2024-11-20 13:50:17.435603] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:29:20.168 [2024-11-20 13:50:17.435664] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:29:20.168 [2024-11-20 13:50:17.435681] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f78030 00:29:20.168 [2024-11-20 13:50:17.435687] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:20.168 [2024-11-20 13:50:17.437280] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:20.168 [2024-11-20 13:50:17.437319] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:29:20.168 Passthru0 00:29:20.168 13:50:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.168 13:50:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:29:20.168 13:50:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.168 13:50:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:29:20.168 13:50:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.168 13:50:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:29:20.168 { 00:29:20.168 "name": "Malloc2", 00:29:20.168 "aliases": [ 00:29:20.168 "8b70afa1-6160-4c34-81be-a0609a0ea775" 00:29:20.168 ], 00:29:20.168 "product_name": "Malloc disk", 00:29:20.168 "block_size": 512, 00:29:20.168 "num_blocks": 16384, 00:29:20.168 "uuid": "8b70afa1-6160-4c34-81be-a0609a0ea775", 00:29:20.168 "assigned_rate_limits": { 00:29:20.168 "rw_ios_per_sec": 0, 00:29:20.168 "rw_mbytes_per_sec": 0, 00:29:20.168 "r_mbytes_per_sec": 0, 00:29:20.168 "w_mbytes_per_sec": 0 00:29:20.168 }, 00:29:20.168 "claimed": true, 00:29:20.168 "claim_type": "exclusive_write", 00:29:20.168 "zoned": false, 00:29:20.168 "supported_io_types": { 00:29:20.168 "read": true, 00:29:20.168 "write": true, 00:29:20.168 "unmap": true, 00:29:20.168 "flush": true, 00:29:20.168 "reset": true, 00:29:20.168 "nvme_admin": false, 00:29:20.168 "nvme_io": false, 00:29:20.168 "nvme_io_md": false, 00:29:20.168 "write_zeroes": true, 00:29:20.168 "zcopy": true, 00:29:20.168 "get_zone_info": false, 00:29:20.168 "zone_management": false, 00:29:20.168 "zone_append": false, 00:29:20.168 "compare": false, 00:29:20.168 "compare_and_write": false, 00:29:20.168 "abort": true, 00:29:20.168 "seek_hole": false, 00:29:20.168 "seek_data": false, 00:29:20.168 "copy": true, 00:29:20.168 "nvme_iov_md": false 00:29:20.168 }, 00:29:20.168 "memory_domains": [ 00:29:20.168 { 00:29:20.168 "dma_device_id": "system", 00:29:20.168 "dma_device_type": 1 00:29:20.168 }, 00:29:20.168 { 00:29:20.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:20.168 "dma_device_type": 2 00:29:20.168 } 00:29:20.168 ], 00:29:20.168 "driver_specific": {} 00:29:20.168 }, 00:29:20.168 { 00:29:20.168 "name": "Passthru0", 00:29:20.168 "aliases": [ 00:29:20.168 "a64c40de-a13f-5f88-a954-8074e2ecfa4c" 00:29:20.168 ], 00:29:20.168 "product_name": "passthru", 00:29:20.168 "block_size": 512, 00:29:20.168 "num_blocks": 16384, 00:29:20.168 "uuid": "a64c40de-a13f-5f88-a954-8074e2ecfa4c", 00:29:20.168 "assigned_rate_limits": { 00:29:20.168 "rw_ios_per_sec": 0, 00:29:20.168 "rw_mbytes_per_sec": 0, 00:29:20.168 "r_mbytes_per_sec": 0, 00:29:20.168 "w_mbytes_per_sec": 0 00:29:20.168 }, 00:29:20.168 "claimed": false, 00:29:20.168 "zoned": false, 00:29:20.168 "supported_io_types": { 00:29:20.168 "read": true, 00:29:20.168 "write": true, 00:29:20.168 "unmap": true, 00:29:20.168 "flush": true, 00:29:20.168 "reset": true, 00:29:20.168 "nvme_admin": false, 00:29:20.168 "nvme_io": false, 00:29:20.168 
"nvme_io_md": false, 00:29:20.168 "write_zeroes": true, 00:29:20.168 "zcopy": true, 00:29:20.168 "get_zone_info": false, 00:29:20.168 "zone_management": false, 00:29:20.168 "zone_append": false, 00:29:20.168 "compare": false, 00:29:20.168 "compare_and_write": false, 00:29:20.168 "abort": true, 00:29:20.168 "seek_hole": false, 00:29:20.168 "seek_data": false, 00:29:20.168 "copy": true, 00:29:20.168 "nvme_iov_md": false 00:29:20.168 }, 00:29:20.168 "memory_domains": [ 00:29:20.168 { 00:29:20.168 "dma_device_id": "system", 00:29:20.168 "dma_device_type": 1 00:29:20.168 }, 00:29:20.168 { 00:29:20.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:20.168 "dma_device_type": 2 00:29:20.168 } 00:29:20.168 ], 00:29:20.168 "driver_specific": { 00:29:20.168 "passthru": { 00:29:20.168 "name": "Passthru0", 00:29:20.168 "base_bdev_name": "Malloc2" 00:29:20.168 } 00:29:20.168 } 00:29:20.168 } 00:29:20.168 ]' 00:29:20.168 13:50:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:29:20.428 13:50:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:29:20.428 13:50:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:29:20.428 13:50:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.428 13:50:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:29:20.428 13:50:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.428 13:50:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:29:20.428 13:50:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.428 13:50:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:29:20.428 13:50:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.428 13:50:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:29:20.428 13:50:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.428 13:50:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:29:20.428 13:50:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.428 13:50:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:29:20.428 13:50:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:29:20.428 13:50:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:29:20.428 00:29:20.428 real 0m0.326s 00:29:20.428 user 0m0.194s 00:29:20.428 sys 0m0.055s 00:29:20.428 13:50:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:20.428 13:50:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:29:20.428 ************************************ 00:29:20.428 END TEST rpc_daemon_integrity 00:29:20.428 ************************************ 00:29:20.428 13:50:17 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:20.428 13:50:17 rpc -- rpc/rpc.sh@84 -- # killprocess 56937 00:29:20.428 13:50:17 rpc -- common/autotest_common.sh@954 -- # '[' -z 56937 ']' 00:29:20.428 13:50:17 rpc -- common/autotest_common.sh@958 -- # kill -0 56937 00:29:20.428 13:50:17 rpc -- common/autotest_common.sh@959 -- # uname 00:29:20.428 13:50:17 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:20.428 13:50:17 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56937 00:29:20.428 13:50:17 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:29:20.428 13:50:17 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:20.428 killing process with pid 56937 00:29:20.428 13:50:17 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56937' 00:29:20.428 13:50:17 rpc -- common/autotest_common.sh@973 -- # kill 56937 00:29:20.428 13:50:17 rpc -- common/autotest_common.sh@978 -- # wait 56937 00:29:20.997 00:29:20.997 real 0m2.848s 00:29:20.997 user 0m3.607s 00:29:20.997 sys 0m0.750s 00:29:20.997 13:50:18 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:20.997 13:50:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:29:20.997 ************************************ 00:29:20.997 END TEST rpc 00:29:20.997 ************************************ 00:29:20.997 13:50:18 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:29:20.997 13:50:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:20.997 13:50:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:20.997 13:50:18 -- common/autotest_common.sh@10 -- # set +x 00:29:20.997 ************************************ 00:29:20.997 START TEST skip_rpc 00:29:20.997 ************************************ 00:29:20.997 13:50:18 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:29:20.997 * Looking for test storage... 00:29:20.997 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:29:20.997 13:50:18 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:20.997 13:50:18 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:20.997 13:50:18 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:29:20.997 13:50:18 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:20.997 13:50:18 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:20.997 13:50:18 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:20.997 13:50:18 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:20.997 13:50:18 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:29:20.997 13:50:18 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:29:20.997 13:50:18 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:29:20.997 13:50:18 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:29:20.997 13:50:18 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:29:20.997 13:50:18 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:29:20.997 13:50:18 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:29:20.997 13:50:18 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:20.997 13:50:18 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:29:20.997 13:50:18 skip_rpc -- scripts/common.sh@345 -- # : 1 00:29:20.997 13:50:18 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:20.997 13:50:18 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:20.997 13:50:18 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:29:20.997 13:50:18 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:29:20.997 13:50:18 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:20.997 13:50:18 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:29:20.997 13:50:18 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:29:20.997 13:50:18 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:29:20.997 13:50:18 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:29:20.997 13:50:18 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:20.997 13:50:18 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:29:20.997 13:50:18 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:29:20.997 13:50:18 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:20.997 13:50:18 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:20.997 13:50:18 skip_rpc -- scripts/common.sh@368 -- # return 0 00:29:20.997 13:50:18 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:20.997 13:50:18 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:20.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.997 --rc genhtml_branch_coverage=1 00:29:20.997 --rc genhtml_function_coverage=1 00:29:20.997 --rc genhtml_legend=1 00:29:20.997 --rc geninfo_all_blocks=1 00:29:20.997 --rc geninfo_unexecuted_blocks=1 00:29:20.997 00:29:20.997 ' 00:29:20.997 13:50:18 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:20.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.997 --rc genhtml_branch_coverage=1 00:29:20.997 --rc genhtml_function_coverage=1 00:29:20.997 --rc genhtml_legend=1 00:29:20.997 --rc geninfo_all_blocks=1 00:29:20.997 --rc geninfo_unexecuted_blocks=1 00:29:20.997 00:29:20.997 ' 00:29:20.997 13:50:18 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:20.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.998 --rc genhtml_branch_coverage=1 00:29:20.998 --rc genhtml_function_coverage=1 00:29:20.998 --rc genhtml_legend=1 00:29:20.998 --rc geninfo_all_blocks=1 00:29:20.998 --rc geninfo_unexecuted_blocks=1 00:29:20.998 00:29:20.998 ' 00:29:20.998 13:50:18 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:20.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.998 --rc genhtml_branch_coverage=1 00:29:20.998 --rc genhtml_function_coverage=1 00:29:20.998 --rc genhtml_legend=1 00:29:20.998 --rc geninfo_all_blocks=1 00:29:20.998 --rc geninfo_unexecuted_blocks=1 00:29:20.998 00:29:20.998 ' 00:29:20.998 13:50:18 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:29:20.998 13:50:18 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:29:20.998 13:50:18 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:29:20.998 13:50:18 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:20.998 13:50:18 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:20.998 13:50:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:20.998 ************************************ 00:29:20.998 START TEST skip_rpc 00:29:20.998 ************************************ 00:29:20.998 13:50:18 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:29:20.998 13:50:18 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=57138 00:29:20.998 13:50:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:29:20.998 13:50:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:29:20.998 13:50:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:29:21.257 [2024-11-20 13:50:18.358287] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:29:21.257 [2024-11-20 13:50:18.358370] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57138 ] 00:29:21.257 [2024-11-20 13:50:18.508223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:21.257 [2024-11-20 13:50:18.564901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:21.516 [2024-11-20 13:50:18.623501] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:29:26.789 13:50:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:29:26.789 13:50:23 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:29:26.789 13:50:23 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:29:26.789 13:50:23 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:26.789 13:50:23 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:26.789 13:50:23 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:26.789 13:50:23 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:26.789 13:50:23 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:29:26.789 13:50:23 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.789 13:50:23 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:26.789 13:50:23 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:26.789 13:50:23 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:29:26.789 13:50:23 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:26.789 13:50:23 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:26.789 13:50:23 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:26.789 13:50:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:29:26.789 13:50:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57138 00:29:26.789 13:50:23 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57138 ']' 00:29:26.789 13:50:23 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57138 00:29:26.789 13:50:23 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:29:26.789 13:50:23 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:26.789 13:50:23 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57138 00:29:26.789 13:50:23 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:26.789 13:50:23 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:26.789 13:50:23 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process 
with pid 57138' 00:29:26.789 killing process with pid 57138 00:29:26.789 13:50:23 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57138 00:29:26.789 13:50:23 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57138 00:29:26.789 00:29:26.789 real 0m5.385s 00:29:26.789 user 0m5.051s 00:29:26.789 sys 0m0.260s 00:29:26.789 13:50:23 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:26.789 13:50:23 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:26.789 ************************************ 00:29:26.789 END TEST skip_rpc 00:29:26.789 ************************************ 00:29:26.789 13:50:23 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:29:26.789 13:50:23 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:26.789 13:50:23 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:26.789 13:50:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:26.789 ************************************ 00:29:26.789 START TEST skip_rpc_with_json 00:29:26.789 ************************************ 00:29:26.789 13:50:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:29:26.789 13:50:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:29:26.789 13:50:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57224 00:29:26.789 13:50:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:29:26.789 13:50:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:29:26.789 13:50:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57224 00:29:26.789 13:50:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57224 ']' 00:29:26.789 13:50:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:26.790 13:50:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:26.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:26.790 13:50:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:26.790 13:50:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:26.790 13:50:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:29:26.790 [2024-11-20 13:50:23.797018] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
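The skip_rpc_with_json case starting here builds a configuration by driving a live target over JSON-RPC, saves it with save_config, and then boots a second target from that file with the RPC server disabled. Condensed from the commands visible in this log (paths and core mask are the ones from this run; calling rpc.py directly is an assumption, the test goes through its rpc_cmd wrapper):

  # first target: configure over the UNIX-domain RPC socket
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  rpc.py nvmf_get_transports --trtype tcp   # expected to fail: transport 'tcp' does not exist yet
  rpc.py nvmf_create_transport -t tcp       # triggers "*** TCP Transport Init ***"
  rpc.py save_config > /home/vagrant/spdk_repo/spdk/test/rpc/config.json
  # second target: replay the saved subsystem config without starting an RPC server
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json

The config.json dump that follows is the full subsystem configuration (fsdev, sock, bdev, nvmf, iscsi, and so on), including the uring socket implementation selected as the default for this uring build.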
00:29:26.790 [2024-11-20 13:50:23.797492] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57224 ] 00:29:26.790 [2024-11-20 13:50:23.946920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.790 [2024-11-20 13:50:24.005074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:26.790 [2024-11-20 13:50:24.067106] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:29:27.726 13:50:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:27.726 13:50:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:29:27.726 13:50:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:29:27.726 13:50:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.726 13:50:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:29:27.726 [2024-11-20 13:50:24.750466] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:29:27.726 request: 00:29:27.726 { 00:29:27.726 "trtype": "tcp", 00:29:27.726 "method": "nvmf_get_transports", 00:29:27.726 "req_id": 1 00:29:27.726 } 00:29:27.726 Got JSON-RPC error response 00:29:27.726 response: 00:29:27.726 { 00:29:27.726 "code": -19, 00:29:27.726 "message": "No such device" 00:29:27.726 } 00:29:27.726 13:50:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:27.726 13:50:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:29:27.726 13:50:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.726 13:50:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:29:27.726 [2024-11-20 13:50:24.758562] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:27.726 13:50:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.726 13:50:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:29:27.726 13:50:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.726 13:50:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:29:27.726 13:50:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.726 13:50:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:29:27.726 { 00:29:27.726 "subsystems": [ 00:29:27.726 { 00:29:27.726 "subsystem": "fsdev", 00:29:27.726 "config": [ 00:29:27.726 { 00:29:27.726 "method": "fsdev_set_opts", 00:29:27.726 "params": { 00:29:27.726 "fsdev_io_pool_size": 65535, 00:29:27.726 "fsdev_io_cache_size": 256 00:29:27.726 } 00:29:27.726 } 00:29:27.726 ] 00:29:27.726 }, 00:29:27.726 { 00:29:27.726 "subsystem": "keyring", 00:29:27.726 "config": [] 00:29:27.726 }, 00:29:27.726 { 00:29:27.726 "subsystem": "iobuf", 00:29:27.726 "config": [ 00:29:27.726 { 00:29:27.726 "method": "iobuf_set_options", 00:29:27.726 "params": { 00:29:27.726 "small_pool_count": 8192, 00:29:27.726 "large_pool_count": 1024, 00:29:27.727 "small_bufsize": 8192, 00:29:27.727 "large_bufsize": 135168, 00:29:27.727 "enable_numa": false 00:29:27.727 } 
00:29:27.727 } 00:29:27.727 ] 00:29:27.727 }, 00:29:27.727 { 00:29:27.727 "subsystem": "sock", 00:29:27.727 "config": [ 00:29:27.727 { 00:29:27.727 "method": "sock_set_default_impl", 00:29:27.727 "params": { 00:29:27.727 "impl_name": "uring" 00:29:27.727 } 00:29:27.727 }, 00:29:27.727 { 00:29:27.727 "method": "sock_impl_set_options", 00:29:27.727 "params": { 00:29:27.727 "impl_name": "ssl", 00:29:27.727 "recv_buf_size": 4096, 00:29:27.727 "send_buf_size": 4096, 00:29:27.727 "enable_recv_pipe": true, 00:29:27.727 "enable_quickack": false, 00:29:27.727 "enable_placement_id": 0, 00:29:27.727 "enable_zerocopy_send_server": true, 00:29:27.727 "enable_zerocopy_send_client": false, 00:29:27.727 "zerocopy_threshold": 0, 00:29:27.727 "tls_version": 0, 00:29:27.727 "enable_ktls": false 00:29:27.727 } 00:29:27.727 }, 00:29:27.727 { 00:29:27.727 "method": "sock_impl_set_options", 00:29:27.727 "params": { 00:29:27.727 "impl_name": "posix", 00:29:27.727 "recv_buf_size": 2097152, 00:29:27.727 "send_buf_size": 2097152, 00:29:27.727 "enable_recv_pipe": true, 00:29:27.727 "enable_quickack": false, 00:29:27.727 "enable_placement_id": 0, 00:29:27.727 "enable_zerocopy_send_server": true, 00:29:27.727 "enable_zerocopy_send_client": false, 00:29:27.727 "zerocopy_threshold": 0, 00:29:27.727 "tls_version": 0, 00:29:27.727 "enable_ktls": false 00:29:27.727 } 00:29:27.727 }, 00:29:27.727 { 00:29:27.727 "method": "sock_impl_set_options", 00:29:27.727 "params": { 00:29:27.727 "impl_name": "uring", 00:29:27.727 "recv_buf_size": 2097152, 00:29:27.727 "send_buf_size": 2097152, 00:29:27.727 "enable_recv_pipe": true, 00:29:27.727 "enable_quickack": false, 00:29:27.727 "enable_placement_id": 0, 00:29:27.727 "enable_zerocopy_send_server": false, 00:29:27.727 "enable_zerocopy_send_client": false, 00:29:27.727 "zerocopy_threshold": 0, 00:29:27.727 "tls_version": 0, 00:29:27.727 "enable_ktls": false 00:29:27.727 } 00:29:27.727 } 00:29:27.727 ] 00:29:27.727 }, 00:29:27.727 { 00:29:27.727 "subsystem": "vmd", 00:29:27.727 "config": [] 00:29:27.727 }, 00:29:27.727 { 00:29:27.727 "subsystem": "accel", 00:29:27.727 "config": [ 00:29:27.727 { 00:29:27.727 "method": "accel_set_options", 00:29:27.727 "params": { 00:29:27.727 "small_cache_size": 128, 00:29:27.727 "large_cache_size": 16, 00:29:27.727 "task_count": 2048, 00:29:27.727 "sequence_count": 2048, 00:29:27.727 "buf_count": 2048 00:29:27.727 } 00:29:27.727 } 00:29:27.727 ] 00:29:27.727 }, 00:29:27.727 { 00:29:27.727 "subsystem": "bdev", 00:29:27.727 "config": [ 00:29:27.727 { 00:29:27.727 "method": "bdev_set_options", 00:29:27.727 "params": { 00:29:27.727 "bdev_io_pool_size": 65535, 00:29:27.727 "bdev_io_cache_size": 256, 00:29:27.727 "bdev_auto_examine": true, 00:29:27.727 "iobuf_small_cache_size": 128, 00:29:27.727 "iobuf_large_cache_size": 16 00:29:27.727 } 00:29:27.727 }, 00:29:27.727 { 00:29:27.727 "method": "bdev_raid_set_options", 00:29:27.727 "params": { 00:29:27.727 "process_window_size_kb": 1024, 00:29:27.727 "process_max_bandwidth_mb_sec": 0 00:29:27.727 } 00:29:27.727 }, 00:29:27.727 { 00:29:27.727 "method": "bdev_iscsi_set_options", 00:29:27.727 "params": { 00:29:27.727 "timeout_sec": 30 00:29:27.727 } 00:29:27.727 }, 00:29:27.727 { 00:29:27.727 "method": "bdev_nvme_set_options", 00:29:27.727 "params": { 00:29:27.727 "action_on_timeout": "none", 00:29:27.727 "timeout_us": 0, 00:29:27.727 "timeout_admin_us": 0, 00:29:27.727 "keep_alive_timeout_ms": 10000, 00:29:27.727 "arbitration_burst": 0, 00:29:27.727 "low_priority_weight": 0, 00:29:27.727 "medium_priority_weight": 
0, 00:29:27.727 "high_priority_weight": 0, 00:29:27.727 "nvme_adminq_poll_period_us": 10000, 00:29:27.727 "nvme_ioq_poll_period_us": 0, 00:29:27.727 "io_queue_requests": 0, 00:29:27.727 "delay_cmd_submit": true, 00:29:27.727 "transport_retry_count": 4, 00:29:27.727 "bdev_retry_count": 3, 00:29:27.727 "transport_ack_timeout": 0, 00:29:27.727 "ctrlr_loss_timeout_sec": 0, 00:29:27.727 "reconnect_delay_sec": 0, 00:29:27.727 "fast_io_fail_timeout_sec": 0, 00:29:27.727 "disable_auto_failback": false, 00:29:27.727 "generate_uuids": false, 00:29:27.727 "transport_tos": 0, 00:29:27.727 "nvme_error_stat": false, 00:29:27.727 "rdma_srq_size": 0, 00:29:27.727 "io_path_stat": false, 00:29:27.727 "allow_accel_sequence": false, 00:29:27.727 "rdma_max_cq_size": 0, 00:29:27.727 "rdma_cm_event_timeout_ms": 0, 00:29:27.727 "dhchap_digests": [ 00:29:27.727 "sha256", 00:29:27.727 "sha384", 00:29:27.727 "sha512" 00:29:27.727 ], 00:29:27.727 "dhchap_dhgroups": [ 00:29:27.727 "null", 00:29:27.727 "ffdhe2048", 00:29:27.727 "ffdhe3072", 00:29:27.727 "ffdhe4096", 00:29:27.727 "ffdhe6144", 00:29:27.727 "ffdhe8192" 00:29:27.727 ] 00:29:27.727 } 00:29:27.727 }, 00:29:27.727 { 00:29:27.727 "method": "bdev_nvme_set_hotplug", 00:29:27.727 "params": { 00:29:27.727 "period_us": 100000, 00:29:27.727 "enable": false 00:29:27.727 } 00:29:27.727 }, 00:29:27.727 { 00:29:27.727 "method": "bdev_wait_for_examine" 00:29:27.727 } 00:29:27.727 ] 00:29:27.727 }, 00:29:27.727 { 00:29:27.727 "subsystem": "scsi", 00:29:27.727 "config": null 00:29:27.727 }, 00:29:27.727 { 00:29:27.727 "subsystem": "scheduler", 00:29:27.727 "config": [ 00:29:27.727 { 00:29:27.727 "method": "framework_set_scheduler", 00:29:27.727 "params": { 00:29:27.727 "name": "static" 00:29:27.727 } 00:29:27.727 } 00:29:27.727 ] 00:29:27.727 }, 00:29:27.727 { 00:29:27.727 "subsystem": "vhost_scsi", 00:29:27.727 "config": [] 00:29:27.727 }, 00:29:27.727 { 00:29:27.727 "subsystem": "vhost_blk", 00:29:27.727 "config": [] 00:29:27.727 }, 00:29:27.727 { 00:29:27.727 "subsystem": "ublk", 00:29:27.727 "config": [] 00:29:27.727 }, 00:29:27.727 { 00:29:27.727 "subsystem": "nbd", 00:29:27.727 "config": [] 00:29:27.727 }, 00:29:27.727 { 00:29:27.727 "subsystem": "nvmf", 00:29:27.727 "config": [ 00:29:27.727 { 00:29:27.727 "method": "nvmf_set_config", 00:29:27.727 "params": { 00:29:27.727 "discovery_filter": "match_any", 00:29:27.727 "admin_cmd_passthru": { 00:29:27.727 "identify_ctrlr": false 00:29:27.727 }, 00:29:27.727 "dhchap_digests": [ 00:29:27.727 "sha256", 00:29:27.727 "sha384", 00:29:27.727 "sha512" 00:29:27.727 ], 00:29:27.727 "dhchap_dhgroups": [ 00:29:27.727 "null", 00:29:27.727 "ffdhe2048", 00:29:27.727 "ffdhe3072", 00:29:27.727 "ffdhe4096", 00:29:27.727 "ffdhe6144", 00:29:27.727 "ffdhe8192" 00:29:27.727 ] 00:29:27.727 } 00:29:27.727 }, 00:29:27.727 { 00:29:27.727 "method": "nvmf_set_max_subsystems", 00:29:27.727 "params": { 00:29:27.727 "max_subsystems": 1024 00:29:27.727 } 00:29:27.727 }, 00:29:27.727 { 00:29:27.727 "method": "nvmf_set_crdt", 00:29:27.727 "params": { 00:29:27.727 "crdt1": 0, 00:29:27.727 "crdt2": 0, 00:29:27.727 "crdt3": 0 00:29:27.727 } 00:29:27.727 }, 00:29:27.727 { 00:29:27.727 "method": "nvmf_create_transport", 00:29:27.727 "params": { 00:29:27.727 "trtype": "TCP", 00:29:27.727 "max_queue_depth": 128, 00:29:27.727 "max_io_qpairs_per_ctrlr": 127, 00:29:27.727 "in_capsule_data_size": 4096, 00:29:27.727 "max_io_size": 131072, 00:29:27.727 "io_unit_size": 131072, 00:29:27.727 "max_aq_depth": 128, 00:29:27.727 "num_shared_buffers": 511, 00:29:27.727 
"buf_cache_size": 4294967295, 00:29:27.727 "dif_insert_or_strip": false, 00:29:27.727 "zcopy": false, 00:29:27.727 "c2h_success": true, 00:29:27.727 "sock_priority": 0, 00:29:27.727 "abort_timeout_sec": 1, 00:29:27.727 "ack_timeout": 0, 00:29:27.727 "data_wr_pool_size": 0 00:29:27.727 } 00:29:27.727 } 00:29:27.727 ] 00:29:27.727 }, 00:29:27.727 { 00:29:27.727 "subsystem": "iscsi", 00:29:27.727 "config": [ 00:29:27.727 { 00:29:27.727 "method": "iscsi_set_options", 00:29:27.727 "params": { 00:29:27.727 "node_base": "iqn.2016-06.io.spdk", 00:29:27.727 "max_sessions": 128, 00:29:27.727 "max_connections_per_session": 2, 00:29:27.727 "max_queue_depth": 64, 00:29:27.727 "default_time2wait": 2, 00:29:27.727 "default_time2retain": 20, 00:29:27.727 "first_burst_length": 8192, 00:29:27.727 "immediate_data": true, 00:29:27.727 "allow_duplicated_isid": false, 00:29:27.727 "error_recovery_level": 0, 00:29:27.727 "nop_timeout": 60, 00:29:27.727 "nop_in_interval": 30, 00:29:27.727 "disable_chap": false, 00:29:27.727 "require_chap": false, 00:29:27.727 "mutual_chap": false, 00:29:27.727 "chap_group": 0, 00:29:27.727 "max_large_datain_per_connection": 64, 00:29:27.728 "max_r2t_per_connection": 4, 00:29:27.728 "pdu_pool_size": 36864, 00:29:27.728 "immediate_data_pool_size": 16384, 00:29:27.728 "data_out_pool_size": 2048 00:29:27.728 } 00:29:27.728 } 00:29:27.728 ] 00:29:27.728 } 00:29:27.728 ] 00:29:27.728 } 00:29:27.728 13:50:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:29:27.728 13:50:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57224 00:29:27.728 13:50:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57224 ']' 00:29:27.728 13:50:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57224 00:29:27.728 13:50:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:29:27.728 13:50:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:27.728 13:50:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57224 00:29:27.728 13:50:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:27.728 13:50:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:27.728 killing process with pid 57224 00:29:27.728 13:50:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57224' 00:29:27.728 13:50:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57224 00:29:27.728 13:50:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57224 00:29:27.986 13:50:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:29:27.986 13:50:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57246 00:29:27.986 13:50:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:29:33.248 13:50:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57246 00:29:33.248 13:50:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57246 ']' 00:29:33.248 13:50:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57246 00:29:33.248 13:50:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:29:33.248 13:50:30 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:33.248 13:50:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57246 00:29:33.248 13:50:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:33.248 13:50:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:33.248 killing process with pid 57246 00:29:33.248 13:50:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57246' 00:29:33.248 13:50:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57246 00:29:33.248 13:50:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57246 00:29:33.507 13:50:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:29:33.507 13:50:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:29:33.507 00:29:33.507 real 0m6.935s 00:29:33.507 user 0m6.664s 00:29:33.507 sys 0m0.635s 00:29:33.507 13:50:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:33.507 13:50:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:29:33.507 ************************************ 00:29:33.507 END TEST skip_rpc_with_json 00:29:33.507 ************************************ 00:29:33.507 13:50:30 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:29:33.507 13:50:30 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:33.507 13:50:30 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:33.507 13:50:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:33.507 ************************************ 00:29:33.507 START TEST skip_rpc_with_delay 00:29:33.507 ************************************ 00:29:33.507 13:50:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:29:33.507 13:50:30 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:29:33.507 13:50:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:29:33.507 13:50:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:29:33.507 13:50:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:33.507 13:50:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:33.507 13:50:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:33.507 13:50:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:33.507 13:50:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:33.507 13:50:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:33.507 13:50:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:33.507 13:50:30 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:29:33.507 13:50:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:29:33.507 [2024-11-20 13:50:30.801091] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:29:33.507 13:50:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:29:33.507 13:50:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:33.507 13:50:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:33.507 13:50:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:33.507 00:29:33.507 real 0m0.086s 00:29:33.507 user 0m0.048s 00:29:33.507 sys 0m0.037s 00:29:33.507 13:50:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:33.507 13:50:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:29:33.507 ************************************ 00:29:33.507 END TEST skip_rpc_with_delay 00:29:33.507 ************************************ 00:29:33.767 13:50:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:29:33.767 13:50:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:29:33.767 13:50:30 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:29:33.767 13:50:30 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:33.767 13:50:30 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:33.767 13:50:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:33.767 ************************************ 00:29:33.767 START TEST exit_on_failed_rpc_init 00:29:33.767 ************************************ 00:29:33.767 13:50:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:29:33.767 13:50:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57361 00:29:33.767 13:50:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:29:33.767 13:50:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57361 00:29:33.767 13:50:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57361 ']' 00:29:33.767 13:50:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:33.767 13:50:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:33.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:33.767 13:50:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:33.767 13:50:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:33.767 13:50:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:29:33.767 [2024-11-20 13:50:30.950558] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:29:33.767 [2024-11-20 13:50:30.950630] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57361 ] 00:29:34.026 [2024-11-20 13:50:31.100858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:34.026 [2024-11-20 13:50:31.158259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:34.026 [2024-11-20 13:50:31.218068] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:29:34.609 13:50:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:34.609 13:50:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:29:34.609 13:50:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:29:34.609 13:50:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:29:34.609 13:50:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:29:34.609 13:50:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:29:34.609 13:50:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:34.610 13:50:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:34.610 13:50:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:34.610 13:50:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:34.610 13:50:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:34.610 13:50:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:34.610 13:50:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:34.610 13:50:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:29:34.610 13:50:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:29:34.868 [2024-11-20 13:50:31.964972] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:29:34.868 [2024-11-20 13:50:31.965040] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57379 ] 00:29:34.868 [2024-11-20 13:50:32.113802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:34.868 [2024-11-20 13:50:32.169203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:34.868 [2024-11-20 13:50:32.169279] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
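The rpc.c *ERROR* lines around this point are the expected result of exit_on_failed_rpc_init: the first spdk_tgt instance (pid 57361, core mask 0x1) already owns the default RPC socket, so the second instance started with -m 0x2 cannot bind /var/tmp/spdk.sock and stops with a non-zero exit code, which the NOT wrapper in the test treats as success. A rough manual reproduction of the same collision, assuming an SPDK build tree laid out as in this run (the sleep, the PID variable and the echo below are illustrative, not taken from this log):

  ./build/bin/spdk_tgt -m 0x1 &            # first target binds the default /var/tmp/spdk.sock
  first_pid=$!
  sleep 2                                  # crude stand-in for the waitforlisten helper
  ./build/bin/spdk_tgt -m 0x2              # same default RPC socket -> rpc_listen fails
  echo "second instance exited with $?"    # expected to be non-zero
  kill "$first_pid"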
00:29:34.868 [2024-11-20 13:50:32.169289] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:29:34.869 [2024-11-20 13:50:32.169295] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:35.128 13:50:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:29:35.128 13:50:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:35.128 13:50:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:29:35.128 13:50:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:29:35.128 13:50:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:29:35.128 13:50:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:35.128 13:50:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:29:35.128 13:50:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57361 00:29:35.128 13:50:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57361 ']' 00:29:35.128 13:50:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57361 00:29:35.128 13:50:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:29:35.128 13:50:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:35.128 13:50:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57361 00:29:35.128 13:50:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:35.128 13:50:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:35.128 13:50:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57361' 00:29:35.128 killing process with pid 57361 00:29:35.128 13:50:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57361 00:29:35.128 13:50:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57361 00:29:35.388 00:29:35.388 real 0m1.705s 00:29:35.388 user 0m1.989s 00:29:35.388 sys 0m0.368s 00:29:35.388 13:50:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:35.388 13:50:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:29:35.388 ************************************ 00:29:35.388 END TEST exit_on_failed_rpc_init 00:29:35.388 ************************************ 00:29:35.388 13:50:32 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:29:35.388 00:29:35.388 real 0m14.590s 00:29:35.388 user 0m13.957s 00:29:35.388 sys 0m1.585s 00:29:35.388 13:50:32 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:35.388 13:50:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:35.388 ************************************ 00:29:35.388 END TEST skip_rpc 00:29:35.388 ************************************ 00:29:35.647 13:50:32 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:29:35.647 13:50:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:35.647 13:50:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:35.647 13:50:32 -- common/autotest_common.sh@10 -- # set +x 00:29:35.647 
************************************ 00:29:35.647 START TEST rpc_client 00:29:35.647 ************************************ 00:29:35.647 13:50:32 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:29:35.647 * Looking for test storage... 00:29:35.647 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:29:35.647 13:50:32 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:35.647 13:50:32 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:29:35.647 13:50:32 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:35.647 13:50:32 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:35.647 13:50:32 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:35.647 13:50:32 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:35.647 13:50:32 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:35.647 13:50:32 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:29:35.647 13:50:32 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:29:35.647 13:50:32 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:29:35.647 13:50:32 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:29:35.647 13:50:32 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:29:35.647 13:50:32 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:29:35.647 13:50:32 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:29:35.647 13:50:32 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:35.647 13:50:32 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:29:35.647 13:50:32 rpc_client -- scripts/common.sh@345 -- # : 1 00:29:35.647 13:50:32 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:35.647 13:50:32 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:35.647 13:50:32 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:29:35.647 13:50:32 rpc_client -- scripts/common.sh@353 -- # local d=1 00:29:35.647 13:50:32 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:35.647 13:50:32 rpc_client -- scripts/common.sh@355 -- # echo 1 00:29:35.647 13:50:32 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:29:35.647 13:50:32 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:29:35.647 13:50:32 rpc_client -- scripts/common.sh@353 -- # local d=2 00:29:35.647 13:50:32 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:35.647 13:50:32 rpc_client -- scripts/common.sh@355 -- # echo 2 00:29:35.647 13:50:32 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:29:35.647 13:50:32 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:35.647 13:50:32 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:35.647 13:50:32 rpc_client -- scripts/common.sh@368 -- # return 0 00:29:35.647 13:50:32 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:35.647 13:50:32 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:35.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.647 --rc genhtml_branch_coverage=1 00:29:35.647 --rc genhtml_function_coverage=1 00:29:35.647 --rc genhtml_legend=1 00:29:35.647 --rc geninfo_all_blocks=1 00:29:35.647 --rc geninfo_unexecuted_blocks=1 00:29:35.647 00:29:35.647 ' 00:29:35.647 13:50:32 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:35.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.647 --rc genhtml_branch_coverage=1 00:29:35.647 --rc genhtml_function_coverage=1 00:29:35.647 --rc genhtml_legend=1 00:29:35.647 --rc geninfo_all_blocks=1 00:29:35.648 --rc geninfo_unexecuted_blocks=1 00:29:35.648 00:29:35.648 ' 00:29:35.648 13:50:32 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:35.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.648 --rc genhtml_branch_coverage=1 00:29:35.648 --rc genhtml_function_coverage=1 00:29:35.648 --rc genhtml_legend=1 00:29:35.648 --rc geninfo_all_blocks=1 00:29:35.648 --rc geninfo_unexecuted_blocks=1 00:29:35.648 00:29:35.648 ' 00:29:35.648 13:50:32 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:35.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.648 --rc genhtml_branch_coverage=1 00:29:35.648 --rc genhtml_function_coverage=1 00:29:35.648 --rc genhtml_legend=1 00:29:35.648 --rc geninfo_all_blocks=1 00:29:35.648 --rc geninfo_unexecuted_blocks=1 00:29:35.648 00:29:35.648 ' 00:29:35.648 13:50:32 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:29:35.648 OK 00:29:35.648 13:50:32 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:29:35.648 00:29:35.648 real 0m0.239s 00:29:35.648 user 0m0.142s 00:29:35.648 sys 0m0.111s 00:29:35.648 13:50:32 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:35.648 13:50:32 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:29:35.648 ************************************ 00:29:35.648 END TEST rpc_client 00:29:35.648 ************************************ 00:29:35.908 13:50:33 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:29:35.908 13:50:33 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:35.908 13:50:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:35.908 13:50:33 -- common/autotest_common.sh@10 -- # set +x 00:29:35.908 ************************************ 00:29:35.908 START TEST json_config 00:29:35.908 ************************************ 00:29:35.908 13:50:33 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:29:35.908 13:50:33 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:35.908 13:50:33 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:35.908 13:50:33 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:29:35.908 13:50:33 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:35.908 13:50:33 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:35.908 13:50:33 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:35.908 13:50:33 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:35.908 13:50:33 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:29:35.908 13:50:33 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:29:35.908 13:50:33 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:29:35.908 13:50:33 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:29:35.908 13:50:33 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:29:35.908 13:50:33 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:29:35.908 13:50:33 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:29:35.908 13:50:33 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:35.908 13:50:33 json_config -- scripts/common.sh@344 -- # case "$op" in 00:29:35.908 13:50:33 json_config -- scripts/common.sh@345 -- # : 1 00:29:35.908 13:50:33 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:35.908 13:50:33 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:35.908 13:50:33 json_config -- scripts/common.sh@365 -- # decimal 1 00:29:35.908 13:50:33 json_config -- scripts/common.sh@353 -- # local d=1 00:29:35.908 13:50:33 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:35.908 13:50:33 json_config -- scripts/common.sh@355 -- # echo 1 00:29:35.908 13:50:33 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:29:35.908 13:50:33 json_config -- scripts/common.sh@366 -- # decimal 2 00:29:35.908 13:50:33 json_config -- scripts/common.sh@353 -- # local d=2 00:29:35.908 13:50:33 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:35.908 13:50:33 json_config -- scripts/common.sh@355 -- # echo 2 00:29:35.908 13:50:33 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:29:35.908 13:50:33 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:35.908 13:50:33 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:35.908 13:50:33 json_config -- scripts/common.sh@368 -- # return 0 00:29:35.908 13:50:33 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:35.908 13:50:33 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:35.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.908 --rc genhtml_branch_coverage=1 00:29:35.908 --rc genhtml_function_coverage=1 00:29:35.908 --rc genhtml_legend=1 00:29:35.908 --rc geninfo_all_blocks=1 00:29:35.908 --rc geninfo_unexecuted_blocks=1 00:29:35.908 00:29:35.908 ' 00:29:35.908 13:50:33 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:35.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.908 --rc genhtml_branch_coverage=1 00:29:35.908 --rc genhtml_function_coverage=1 00:29:35.908 --rc genhtml_legend=1 00:29:35.908 --rc geninfo_all_blocks=1 00:29:35.908 --rc geninfo_unexecuted_blocks=1 00:29:35.908 00:29:35.908 ' 00:29:35.908 13:50:33 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:35.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.908 --rc genhtml_branch_coverage=1 00:29:35.908 --rc genhtml_function_coverage=1 00:29:35.908 --rc genhtml_legend=1 00:29:35.908 --rc geninfo_all_blocks=1 00:29:35.908 --rc geninfo_unexecuted_blocks=1 00:29:35.908 00:29:35.908 ' 00:29:35.908 13:50:33 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:35.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.908 --rc genhtml_branch_coverage=1 00:29:35.908 --rc genhtml_function_coverage=1 00:29:35.908 --rc genhtml_legend=1 00:29:35.908 --rc geninfo_all_blocks=1 00:29:35.908 --rc geninfo_unexecuted_blocks=1 00:29:35.908 00:29:35.908 ' 00:29:35.908 13:50:33 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:35.908 13:50:33 json_config -- nvmf/common.sh@7 -- # uname -s 00:29:35.908 13:50:33 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:35.908 13:50:33 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:35.908 13:50:33 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:35.908 13:50:33 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:35.908 13:50:33 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:35.908 13:50:33 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:35.908 13:50:33 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:35.908 13:50:33 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:35.908 13:50:33 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:35.908 13:50:33 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:35.908 13:50:33 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:29:35.908 13:50:33 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=105ec898-1662-46bd-85be-b241e399edb9 00:29:35.908 13:50:33 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:35.908 13:50:33 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:35.908 13:50:33 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:29:35.908 13:50:33 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:35.908 13:50:33 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:35.908 13:50:33 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:29:36.167 13:50:33 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:36.167 13:50:33 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:36.167 13:50:33 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:36.167 13:50:33 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.167 13:50:33 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.168 13:50:33 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.168 13:50:33 json_config -- paths/export.sh@5 -- # export PATH 00:29:36.168 13:50:33 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.168 13:50:33 json_config -- nvmf/common.sh@51 -- # : 0 00:29:36.168 13:50:33 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:36.168 13:50:33 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:36.168 13:50:33 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:36.168 13:50:33 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:36.168 13:50:33 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:36.168 13:50:33 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:36.168 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:36.168 13:50:33 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:36.168 13:50:33 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:36.168 13:50:33 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:36.168 13:50:33 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:29:36.168 13:50:33 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:29:36.168 13:50:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:29:36.168 13:50:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:29:36.168 13:50:33 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:29:36.168 13:50:33 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:29:36.168 13:50:33 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:29:36.168 13:50:33 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:29:36.168 13:50:33 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:29:36.168 13:50:33 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:29:36.168 13:50:33 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:29:36.168 13:50:33 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:29:36.168 13:50:33 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:29:36.168 13:50:33 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:29:36.168 13:50:33 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:29:36.168 INFO: JSON configuration test init 00:29:36.168 13:50:33 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:29:36.168 13:50:33 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:29:36.168 13:50:33 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:29:36.168 13:50:33 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:36.168 13:50:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:29:36.168 13:50:33 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:29:36.168 13:50:33 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:36.168 13:50:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:29:36.168 13:50:33 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:29:36.168 13:50:33 json_config -- json_config/common.sh@9 -- # local app=target 00:29:36.168 13:50:33 json_config -- json_config/common.sh@10 -- # shift 
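The app_params and app_socket tables declared above are what json_config_test_start_app consumes next: for the 'target' app it launches spdk_tgt with '-m 0x1 -s 1024', the /var/tmp/spdk_tgt.sock RPC socket and --wait-for-rpc, then polls the socket until it answers. A minimal hand-run equivalent (a sketch only; the real helper records the PID in app_pid[] and uses waitforlisten rather than a fixed sleep):

  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  sleep 2                                                      # stand-in for waitforlisten
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods   # succeeds once the RPC socket is up
  # with --wait-for-rpc the target stays in pre-init state until framework_start_init
  # is issued, which happens later when the test loads its configuration over RPC.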
00:29:36.168 13:50:33 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:29:36.168 13:50:33 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:29:36.168 13:50:33 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:29:36.168 13:50:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:29:36.168 13:50:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:29:36.168 13:50:33 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57513 00:29:36.168 13:50:33 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:29:36.168 13:50:33 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:29:36.168 Waiting for target to run... 00:29:36.168 13:50:33 json_config -- json_config/common.sh@25 -- # waitforlisten 57513 /var/tmp/spdk_tgt.sock 00:29:36.168 13:50:33 json_config -- common/autotest_common.sh@835 -- # '[' -z 57513 ']' 00:29:36.168 13:50:33 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:29:36.168 13:50:33 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:36.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:29:36.168 13:50:33 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:29:36.168 13:50:33 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:36.168 13:50:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:29:36.168 [2024-11-20 13:50:33.326860] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:29:36.168 [2024-11-20 13:50:33.326966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57513 ] 00:29:36.737 [2024-11-20 13:50:33.897420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:36.737 [2024-11-20 13:50:33.944139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:36.996 13:50:34 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:36.996 13:50:34 json_config -- common/autotest_common.sh@868 -- # return 0 00:29:36.996 00:29:36.996 13:50:34 json_config -- json_config/common.sh@26 -- # echo '' 00:29:36.996 13:50:34 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:29:36.996 13:50:34 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:29:36.996 13:50:34 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:36.996 13:50:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:29:36.996 13:50:34 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:29:36.996 13:50:34 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:29:36.996 13:50:34 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:36.996 13:50:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:29:37.256 13:50:34 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:29:37.256 13:50:34 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:29:37.256 13:50:34 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:29:37.518 [2024-11-20 13:50:34.589780] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:29:37.518 13:50:34 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:29:37.518 13:50:34 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:29:37.518 13:50:34 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:37.518 13:50:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:29:37.518 13:50:34 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:29:37.518 13:50:34 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:29:37.518 13:50:34 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:29:37.518 13:50:34 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:29:37.518 13:50:34 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:29:37.518 13:50:34 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:29:37.518 13:50:34 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:29:37.518 13:50:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:29:37.777 13:50:35 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:29:37.777 13:50:35 json_config -- json_config/json_config.sh@51 -- # local get_types 00:29:37.777 13:50:35 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:29:37.777 13:50:35 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:29:37.777 13:50:35 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:29:37.777 13:50:35 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:29:37.777 13:50:35 json_config -- json_config/json_config.sh@54 -- # sort 00:29:37.777 13:50:35 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:29:37.777 13:50:35 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:29:37.777 13:50:35 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:29:37.777 13:50:35 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:37.777 13:50:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:29:38.036 13:50:35 json_config -- json_config/json_config.sh@62 -- # return 0 00:29:38.036 13:50:35 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:29:38.036 13:50:35 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:29:38.036 13:50:35 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:29:38.036 13:50:35 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:29:38.036 13:50:35 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:29:38.036 13:50:35 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:29:38.036 13:50:35 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:38.036 13:50:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:29:38.036 13:50:35 json_config -- 
json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:29:38.036 13:50:35 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:29:38.036 13:50:35 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:29:38.036 13:50:35 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:29:38.036 13:50:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:29:38.295 MallocForNvmf0 00:29:38.295 13:50:35 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:29:38.295 13:50:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:29:38.554 MallocForNvmf1 00:29:38.554 13:50:35 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:29:38.554 13:50:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:29:38.554 [2024-11-20 13:50:35.854249] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:38.554 13:50:35 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:38.554 13:50:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:38.814 13:50:36 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:29:38.814 13:50:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:29:39.072 13:50:36 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:29:39.072 13:50:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:29:39.329 13:50:36 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:29:39.329 13:50:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:29:39.587 [2024-11-20 13:50:36.757084] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:39.587 13:50:36 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:29:39.587 13:50:36 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:39.587 13:50:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:29:39.587 13:50:36 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:29:39.587 13:50:36 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:39.587 13:50:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:29:39.587 13:50:36 json_config -- json_config/json_config.sh@302 -- # [[ 
0 -eq 1 ]] 00:29:39.587 13:50:36 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:29:39.587 13:50:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:29:39.846 MallocBdevForConfigChangeCheck 00:29:39.846 13:50:37 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:29:39.846 13:50:37 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:39.846 13:50:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:29:40.104 13:50:37 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:29:40.104 13:50:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:29:40.411 INFO: shutting down applications... 00:29:40.411 13:50:37 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:29:40.411 13:50:37 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:29:40.411 13:50:37 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:29:40.411 13:50:37 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:29:40.411 13:50:37 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:29:40.671 Calling clear_iscsi_subsystem 00:29:40.671 Calling clear_nvmf_subsystem 00:29:40.671 Calling clear_nbd_subsystem 00:29:40.671 Calling clear_ublk_subsystem 00:29:40.671 Calling clear_vhost_blk_subsystem 00:29:40.671 Calling clear_vhost_scsi_subsystem 00:29:40.671 Calling clear_bdev_subsystem 00:29:40.671 13:50:37 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:29:40.671 13:50:37 json_config -- json_config/json_config.sh@350 -- # count=100 00:29:40.671 13:50:37 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:29:40.671 13:50:37 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:29:40.671 13:50:37 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:29:40.671 13:50:37 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:29:41.239 13:50:38 json_config -- json_config/json_config.sh@352 -- # break 00:29:41.239 13:50:38 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:29:41.239 13:50:38 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:29:41.239 13:50:38 json_config -- json_config/common.sh@31 -- # local app=target 00:29:41.239 13:50:38 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:29:41.239 13:50:38 json_config -- json_config/common.sh@35 -- # [[ -n 57513 ]] 00:29:41.239 13:50:38 json_config -- json_config/common.sh@38 -- # kill -SIGINT 57513 00:29:41.239 13:50:38 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:29:41.239 13:50:38 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:29:41.239 13:50:38 json_config -- json_config/common.sh@41 -- # kill -0 57513 00:29:41.239 13:50:38 json_config -- json_config/common.sh@45 -- # 
sleep 0.5 00:29:41.806 13:50:38 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:29:41.806 13:50:38 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:29:41.806 13:50:38 json_config -- json_config/common.sh@41 -- # kill -0 57513 00:29:41.806 13:50:38 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:29:41.806 13:50:38 json_config -- json_config/common.sh@43 -- # break 00:29:41.806 13:50:38 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:29:41.806 SPDK target shutdown done 00:29:41.806 13:50:38 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:29:41.806 INFO: relaunching applications... 00:29:41.806 13:50:38 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:29:41.806 13:50:38 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:29:41.806 13:50:38 json_config -- json_config/common.sh@9 -- # local app=target 00:29:41.806 13:50:38 json_config -- json_config/common.sh@10 -- # shift 00:29:41.806 13:50:38 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:29:41.806 13:50:38 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:29:41.806 13:50:38 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:29:41.806 13:50:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:29:41.807 13:50:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:29:41.807 13:50:38 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57707 00:29:41.807 13:50:38 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:29:41.807 13:50:38 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:29:41.807 Waiting for target to run... 00:29:41.807 13:50:38 json_config -- json_config/common.sh@25 -- # waitforlisten 57707 /var/tmp/spdk_tgt.sock 00:29:41.807 13:50:38 json_config -- common/autotest_common.sh@835 -- # '[' -z 57707 ']' 00:29:41.807 13:50:38 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:29:41.807 13:50:38 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:41.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:29:41.807 13:50:38 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:29:41.807 13:50:38 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:41.807 13:50:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:29:41.807 [2024-11-20 13:50:38.938235] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:29:41.807 [2024-11-20 13:50:38.938327] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57707 ] 00:29:42.065 [2024-11-20 13:50:39.308454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:42.065 [2024-11-20 13:50:39.355214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:42.324 [2024-11-20 13:50:39.491942] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:29:42.583 [2024-11-20 13:50:39.701414] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:42.583 [2024-11-20 13:50:39.733421] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:42.583 13:50:39 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:42.583 13:50:39 json_config -- common/autotest_common.sh@868 -- # return 0 00:29:42.583 00:29:42.583 13:50:39 json_config -- json_config/common.sh@26 -- # echo '' 00:29:42.583 13:50:39 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:29:42.583 INFO: Checking if target configuration is the same... 00:29:42.583 13:50:39 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:29:42.583 13:50:39 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:29:42.583 13:50:39 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:29:42.583 13:50:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:29:42.583 + '[' 2 -ne 2 ']' 00:29:42.583 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:29:42.583 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:29:42.583 + rootdir=/home/vagrant/spdk_repo/spdk 00:29:42.583 +++ basename /dev/fd/62 00:29:42.583 ++ mktemp /tmp/62.XXX 00:29:42.583 + tmp_file_1=/tmp/62.cql 00:29:42.583 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:29:42.583 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:29:42.583 + tmp_file_2=/tmp/spdk_tgt_config.json.d23 00:29:42.583 + ret=0 00:29:42.583 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:29:43.150 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:29:43.150 + diff -u /tmp/62.cql /tmp/spdk_tgt_config.json.d23 00:29:43.150 + echo 'INFO: JSON config files are the same' 00:29:43.150 INFO: JSON config files are the same 00:29:43.150 + rm /tmp/62.cql /tmp/spdk_tgt_config.json.d23 00:29:43.150 + exit 0 00:29:43.150 13:50:40 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:29:43.150 INFO: changing configuration and checking if this can be detected... 00:29:43.150 13:50:40 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
00:29:43.150 13:50:40 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:29:43.150 13:50:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:29:43.409 13:50:40 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:29:43.409 13:50:40 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:29:43.409 13:50:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:29:43.409 + '[' 2 -ne 2 ']' 00:29:43.409 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:29:43.409 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:29:43.409 + rootdir=/home/vagrant/spdk_repo/spdk 00:29:43.409 +++ basename /dev/fd/62 00:29:43.409 ++ mktemp /tmp/62.XXX 00:29:43.409 + tmp_file_1=/tmp/62.EE1 00:29:43.409 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:29:43.409 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:29:43.409 + tmp_file_2=/tmp/spdk_tgt_config.json.D1W 00:29:43.409 + ret=0 00:29:43.409 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:29:43.977 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:29:43.977 + diff -u /tmp/62.EE1 /tmp/spdk_tgt_config.json.D1W 00:29:43.977 + ret=1 00:29:43.977 + echo '=== Start of file: /tmp/62.EE1 ===' 00:29:43.977 + cat /tmp/62.EE1 00:29:43.977 + echo '=== End of file: /tmp/62.EE1 ===' 00:29:43.977 + echo '' 00:29:43.977 + echo '=== Start of file: /tmp/spdk_tgt_config.json.D1W ===' 00:29:43.977 + cat /tmp/spdk_tgt_config.json.D1W 00:29:43.977 + echo '=== End of file: /tmp/spdk_tgt_config.json.D1W ===' 00:29:43.977 + echo '' 00:29:43.977 + rm /tmp/62.EE1 /tmp/spdk_tgt_config.json.D1W 00:29:43.977 + exit 1 00:29:43.977 INFO: configuration change detected. 00:29:43.977 13:50:41 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
00:29:43.977 13:50:41 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:29:43.977 13:50:41 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:29:43.977 13:50:41 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:43.977 13:50:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:29:43.977 13:50:41 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:29:43.977 13:50:41 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:29:43.977 13:50:41 json_config -- json_config/json_config.sh@324 -- # [[ -n 57707 ]] 00:29:43.977 13:50:41 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:29:43.977 13:50:41 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:29:43.977 13:50:41 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:43.977 13:50:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:29:43.977 13:50:41 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:29:43.977 13:50:41 json_config -- json_config/json_config.sh@200 -- # uname -s 00:29:43.977 13:50:41 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:29:43.977 13:50:41 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:29:43.977 13:50:41 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:29:43.977 13:50:41 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:29:43.977 13:50:41 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:43.977 13:50:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:29:43.977 13:50:41 json_config -- json_config/json_config.sh@330 -- # killprocess 57707 00:29:43.977 13:50:41 json_config -- common/autotest_common.sh@954 -- # '[' -z 57707 ']' 00:29:43.977 13:50:41 json_config -- common/autotest_common.sh@958 -- # kill -0 57707 00:29:43.977 13:50:41 json_config -- common/autotest_common.sh@959 -- # uname 00:29:43.977 13:50:41 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:43.977 13:50:41 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57707 00:29:43.977 13:50:41 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:43.977 13:50:41 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:43.977 killing process with pid 57707 00:29:43.977 13:50:41 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57707' 00:29:43.977 13:50:41 json_config -- common/autotest_common.sh@973 -- # kill 57707 00:29:43.977 13:50:41 json_config -- common/autotest_common.sh@978 -- # wait 57707 00:29:44.236 13:50:41 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:29:44.236 13:50:41 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:29:44.236 13:50:41 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:44.236 13:50:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:29:44.236 13:50:41 json_config -- json_config/json_config.sh@335 -- # return 0 00:29:44.236 INFO: Success 00:29:44.236 13:50:41 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:29:44.236 00:29:44.236 real 0m8.419s 00:29:44.236 user 0m11.648s 00:29:44.236 sys 0m2.034s 00:29:44.236 
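The change detection above boils down to three steps driven by json_diff.sh: dump the live configuration over the RPC socket with save_config, normalize both that dump and the on-disk spdk_tgt_config.json through config_filter.py -method sort, and diff the two normalized files; the ret=1 from diff is what produces "configuration change detected". A minimal sketch of the same check, assuming config_filter.py filters stdin to stdout (the trace only shows it invoked with -method sort) and using hypothetical fixed temp-file names in place of the mktemp ones:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
    saved=/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json

    # hypothetical fixed names; json_diff.sh creates these with mktemp
    "$rpc" -s /var/tmp/spdk_tgt.sock save_config | "$filter" -method sort > /tmp/live.sorted.json
    "$filter" -method sort < "$saved" > /tmp/saved.sorted.json
    diff -u /tmp/saved.sorted.json /tmp/live.sorted.json || echo 'INFO: configuration change detected.'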
************************************ 00:29:44.236 END TEST json_config 00:29:44.236 ************************************ 00:29:44.236 13:50:41 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:44.236 13:50:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:29:44.236 13:50:41 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:29:44.236 13:50:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:44.236 13:50:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:44.236 13:50:41 -- common/autotest_common.sh@10 -- # set +x 00:29:44.236 ************************************ 00:29:44.236 START TEST json_config_extra_key 00:29:44.236 ************************************ 00:29:44.236 13:50:41 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:29:44.496 13:50:41 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:44.496 13:50:41 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:29:44.496 13:50:41 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:44.496 13:50:41 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:44.496 13:50:41 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:44.496 13:50:41 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:44.496 13:50:41 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:44.496 13:50:41 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:29:44.496 13:50:41 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:29:44.496 13:50:41 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:29:44.496 13:50:41 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:29:44.496 13:50:41 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:29:44.496 13:50:41 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:29:44.496 13:50:41 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:29:44.496 13:50:41 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:44.496 13:50:41 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:29:44.496 13:50:41 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:29:44.496 13:50:41 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:44.496 13:50:41 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:44.496 13:50:41 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:29:44.496 13:50:41 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:29:44.496 13:50:41 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:44.496 13:50:41 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:29:44.496 13:50:41 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:29:44.496 13:50:41 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:29:44.496 13:50:41 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:29:44.496 13:50:41 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:44.496 13:50:41 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:29:44.496 13:50:41 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:29:44.497 13:50:41 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:44.497 13:50:41 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:44.497 13:50:41 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:29:44.497 13:50:41 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:44.497 13:50:41 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:44.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.497 --rc genhtml_branch_coverage=1 00:29:44.497 --rc genhtml_function_coverage=1 00:29:44.497 --rc genhtml_legend=1 00:29:44.497 --rc geninfo_all_blocks=1 00:29:44.497 --rc geninfo_unexecuted_blocks=1 00:29:44.497 00:29:44.497 ' 00:29:44.497 13:50:41 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:44.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.497 --rc genhtml_branch_coverage=1 00:29:44.497 --rc genhtml_function_coverage=1 00:29:44.497 --rc genhtml_legend=1 00:29:44.497 --rc geninfo_all_blocks=1 00:29:44.497 --rc geninfo_unexecuted_blocks=1 00:29:44.497 00:29:44.497 ' 00:29:44.497 13:50:41 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:44.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.497 --rc genhtml_branch_coverage=1 00:29:44.497 --rc genhtml_function_coverage=1 00:29:44.497 --rc genhtml_legend=1 00:29:44.497 --rc geninfo_all_blocks=1 00:29:44.497 --rc geninfo_unexecuted_blocks=1 00:29:44.497 00:29:44.497 ' 00:29:44.497 13:50:41 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:44.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.497 --rc genhtml_branch_coverage=1 00:29:44.497 --rc genhtml_function_coverage=1 00:29:44.497 --rc genhtml_legend=1 00:29:44.497 --rc geninfo_all_blocks=1 00:29:44.497 --rc geninfo_unexecuted_blocks=1 00:29:44.497 00:29:44.497 ' 00:29:44.497 13:50:41 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:44.497 13:50:41 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:29:44.497 13:50:41 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:44.497 13:50:41 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:44.497 13:50:41 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:44.497 13:50:41 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:44.497 13:50:41 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:44.497 13:50:41 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:44.497 13:50:41 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:44.497 13:50:41 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:44.497 13:50:41 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:44.497 13:50:41 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:44.497 13:50:41 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:29:44.497 13:50:41 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=105ec898-1662-46bd-85be-b241e399edb9 00:29:44.497 13:50:41 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:44.497 13:50:41 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:44.497 13:50:41 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:29:44.497 13:50:41 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:44.497 13:50:41 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:44.497 13:50:41 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:29:44.497 13:50:41 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:44.497 13:50:41 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:44.497 13:50:41 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:44.497 13:50:41 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.497 13:50:41 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.497 13:50:41 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.497 13:50:41 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:29:44.497 13:50:41 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.497 13:50:41 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:29:44.497 13:50:41 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:44.497 13:50:41 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:44.497 13:50:41 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:44.497 13:50:41 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:44.497 13:50:41 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:44.497 13:50:41 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:44.497 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:44.497 13:50:41 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:44.497 13:50:41 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:44.497 13:50:41 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:44.497 13:50:41 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:29:44.497 13:50:41 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:29:44.497 13:50:41 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:29:44.497 13:50:41 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:29:44.497 13:50:41 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:29:44.497 13:50:41 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:29:44.497 13:50:41 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:29:44.497 13:50:41 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:29:44.497 13:50:41 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:29:44.497 13:50:41 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:29:44.497 13:50:41 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:29:44.497 INFO: launching applications... 
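The declare -A trace above is the per-application bookkeeping kept by json_config/common.sh: parallel associative arrays, keyed by the app name ("target" here), hold its pid, its RPC socket path, its spdk_tgt parameters, and the JSON config it should load. A rough sketch of how those arrays fit together at launch time (the backgrounding and pid capture are inferred from the helper's behaviour, not shown verbatim in the trace):

    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')

    app=target
    # launch spdk_tgt with this app's parameters, socket and config, then record its pid
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ${app_params[$app]} \
        -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
    app_pid[$app]=$!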
00:29:44.497 13:50:41 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:29:44.497 13:50:41 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:29:44.497 13:50:41 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:29:44.497 13:50:41 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:29:44.497 13:50:41 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:29:44.497 13:50:41 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:29:44.497 13:50:41 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:29:44.497 13:50:41 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:29:44.497 13:50:41 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57857 00:29:44.497 13:50:41 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:29:44.497 Waiting for target to run... 00:29:44.497 13:50:41 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57857 /var/tmp/spdk_tgt.sock 00:29:44.497 13:50:41 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57857 ']' 00:29:44.497 13:50:41 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:29:44.498 13:50:41 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:29:44.498 13:50:41 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:44.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:29:44.498 13:50:41 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:29:44.498 13:50:41 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:44.498 13:50:41 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:29:44.498 [2024-11-20 13:50:41.754683] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:29:44.498 [2024-11-20 13:50:41.754797] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57857 ] 00:29:45.064 [2024-11-20 13:50:42.129002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:45.064 [2024-11-20 13:50:42.177059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:45.064 [2024-11-20 13:50:42.208644] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:29:45.633 00:29:45.633 INFO: shutting down applications... 00:29:45.633 13:50:42 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:45.633 13:50:42 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:29:45.633 13:50:42 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:29:45.633 13:50:42 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:29:45.633 13:50:42 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:29:45.634 13:50:42 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:29:45.634 13:50:42 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:29:45.634 13:50:42 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57857 ]] 00:29:45.634 13:50:42 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57857 00:29:45.634 13:50:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:29:45.634 13:50:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:29:45.634 13:50:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57857 00:29:45.634 13:50:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:29:45.893 13:50:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:29:45.893 13:50:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:29:45.893 13:50:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57857 00:29:45.893 13:50:43 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:29:45.893 13:50:43 json_config_extra_key -- json_config/common.sh@43 -- # break 00:29:45.893 13:50:43 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:29:45.893 13:50:43 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:29:45.893 SPDK target shutdown done 00:29:45.893 13:50:43 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:29:45.893 Success 00:29:45.893 00:29:45.893 real 0m1.704s 00:29:45.893 user 0m1.495s 00:29:45.893 sys 0m0.419s 00:29:45.893 13:50:43 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:45.893 13:50:43 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:29:45.893 ************************************ 00:29:45.893 END TEST json_config_extra_key 00:29:45.893 ************************************ 00:29:46.153 13:50:43 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:29:46.153 13:50:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:46.153 13:50:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:46.153 13:50:43 -- common/autotest_common.sh@10 -- # set +x 00:29:46.153 ************************************ 00:29:46.153 START TEST alias_rpc 00:29:46.153 ************************************ 00:29:46.153 13:50:43 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:29:46.153 * Looking for test storage... 
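Both shutdowns above (pids 57707 and 57857) go through the same json_config/common.sh helper: send SIGINT to the target, then poll with kill -0 every half second, for up to 30 iterations, until the process disappears. A minimal sketch of that loop, with the retry budget taken from the (( i < 30 )) and sleep 0.5 lines in the trace:

    pid=57857
    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        # kill -0 only tests whether the pid still exists
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done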
00:29:46.153 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:29:46.153 13:50:43 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:46.153 13:50:43 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:46.153 13:50:43 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:29:46.413 13:50:43 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:46.413 13:50:43 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:46.413 13:50:43 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:46.413 13:50:43 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:46.413 13:50:43 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:29:46.413 13:50:43 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:29:46.413 13:50:43 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:29:46.413 13:50:43 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:29:46.413 13:50:43 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:29:46.413 13:50:43 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:29:46.413 13:50:43 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:29:46.413 13:50:43 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:46.413 13:50:43 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:29:46.413 13:50:43 alias_rpc -- scripts/common.sh@345 -- # : 1 00:29:46.413 13:50:43 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:46.414 13:50:43 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:46.414 13:50:43 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:29:46.414 13:50:43 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:29:46.414 13:50:43 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:46.414 13:50:43 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:29:46.414 13:50:43 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:29:46.414 13:50:43 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:29:46.414 13:50:43 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:29:46.414 13:50:43 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:46.414 13:50:43 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:29:46.414 13:50:43 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:29:46.414 13:50:43 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:46.414 13:50:43 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:46.414 13:50:43 alias_rpc -- scripts/common.sh@368 -- # return 0 00:29:46.414 13:50:43 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:46.414 13:50:43 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:46.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.414 --rc genhtml_branch_coverage=1 00:29:46.414 --rc genhtml_function_coverage=1 00:29:46.414 --rc genhtml_legend=1 00:29:46.414 --rc geninfo_all_blocks=1 00:29:46.414 --rc geninfo_unexecuted_blocks=1 00:29:46.414 00:29:46.414 ' 00:29:46.414 13:50:43 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:46.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.414 --rc genhtml_branch_coverage=1 00:29:46.414 --rc genhtml_function_coverage=1 00:29:46.414 --rc genhtml_legend=1 00:29:46.414 --rc geninfo_all_blocks=1 00:29:46.414 --rc geninfo_unexecuted_blocks=1 00:29:46.414 00:29:46.414 ' 00:29:46.414 13:50:43 alias_rpc -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:46.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.414 --rc genhtml_branch_coverage=1 00:29:46.414 --rc genhtml_function_coverage=1 00:29:46.414 --rc genhtml_legend=1 00:29:46.414 --rc geninfo_all_blocks=1 00:29:46.414 --rc geninfo_unexecuted_blocks=1 00:29:46.414 00:29:46.414 ' 00:29:46.414 13:50:43 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:46.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.414 --rc genhtml_branch_coverage=1 00:29:46.414 --rc genhtml_function_coverage=1 00:29:46.414 --rc genhtml_legend=1 00:29:46.414 --rc geninfo_all_blocks=1 00:29:46.414 --rc geninfo_unexecuted_blocks=1 00:29:46.414 00:29:46.414 ' 00:29:46.414 13:50:43 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:29:46.414 13:50:43 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57935 00:29:46.414 13:50:43 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:46.414 13:50:43 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57935 00:29:46.414 13:50:43 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57935 ']' 00:29:46.414 13:50:43 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:46.414 13:50:43 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:46.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:46.414 13:50:43 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:46.414 13:50:43 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:46.414 13:50:43 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:46.414 [2024-11-20 13:50:43.556639] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:29:46.414 [2024-11-20 13:50:43.557054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57935 ] 00:29:46.414 [2024-11-20 13:50:43.696952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:46.673 [2024-11-20 13:50:43.753162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:46.673 [2024-11-20 13:50:43.809784] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:29:47.244 13:50:44 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:47.244 13:50:44 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:29:47.244 13:50:44 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:29:47.504 13:50:44 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57935 00:29:47.504 13:50:44 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57935 ']' 00:29:47.504 13:50:44 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57935 00:29:47.504 13:50:44 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:29:47.504 13:50:44 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:47.504 13:50:44 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57935 00:29:47.504 killing process with pid 57935 00:29:47.504 13:50:44 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:47.504 13:50:44 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:47.504 13:50:44 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57935' 00:29:47.504 13:50:44 alias_rpc -- common/autotest_common.sh@973 -- # kill 57935 00:29:47.504 13:50:44 alias_rpc -- common/autotest_common.sh@978 -- # wait 57935 00:29:48.074 ************************************ 00:29:48.074 END TEST alias_rpc 00:29:48.074 ************************************ 00:29:48.074 00:29:48.074 real 0m1.841s 00:29:48.074 user 0m2.021s 00:29:48.074 sys 0m0.447s 00:29:48.074 13:50:45 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:48.074 13:50:45 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:48.074 13:50:45 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:29:48.074 13:50:45 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:29:48.074 13:50:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:48.074 13:50:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:48.074 13:50:45 -- common/autotest_common.sh@10 -- # set +x 00:29:48.074 ************************************ 00:29:48.074 START TEST spdkcli_tcp 00:29:48.074 ************************************ 00:29:48.074 13:50:45 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:29:48.074 * Looking for test storage... 
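The alias_rpc teardown above uses the killprocess helper from common/autotest_common.sh: it checks that a pid was passed, confirms the process is still alive with kill -0, looks up the command name with ps to decide whether it is a sudo wrapper, and only then kills and waits on it. A condensed sketch of that flow (the sudo branch is elided, since it is not taken here, and wait assumes the target was started by this same shell, as it is in the test):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1            # nothing to kill
        kill -0 "$pid" || return 1           # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")
        echo "killing process with pid $pid"
        [ "$name" = sudo ] || kill "$pid"    # reactor_0 here, so a plain kill
        wait "$pid"
    }
    killprocess 57935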
00:29:48.074 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:29:48.074 13:50:45 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:48.074 13:50:45 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:29:48.074 13:50:45 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:48.074 13:50:45 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:48.074 13:50:45 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:48.074 13:50:45 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:48.074 13:50:45 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:48.074 13:50:45 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:29:48.074 13:50:45 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:29:48.074 13:50:45 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:29:48.074 13:50:45 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:29:48.074 13:50:45 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:29:48.074 13:50:45 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:29:48.074 13:50:45 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:29:48.074 13:50:45 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:48.074 13:50:45 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:29:48.074 13:50:45 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:29:48.074 13:50:45 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:48.074 13:50:45 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:48.074 13:50:45 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:29:48.074 13:50:45 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:29:48.074 13:50:45 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:48.074 13:50:45 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:29:48.074 13:50:45 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:29:48.074 13:50:45 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:29:48.074 13:50:45 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:29:48.074 13:50:45 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:48.074 13:50:45 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:29:48.333 13:50:45 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:29:48.333 13:50:45 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:48.333 13:50:45 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:48.333 13:50:45 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:29:48.333 13:50:45 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:48.333 13:50:45 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:48.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.333 --rc genhtml_branch_coverage=1 00:29:48.333 --rc genhtml_function_coverage=1 00:29:48.333 --rc genhtml_legend=1 00:29:48.333 --rc geninfo_all_blocks=1 00:29:48.333 --rc geninfo_unexecuted_blocks=1 00:29:48.333 00:29:48.333 ' 00:29:48.333 13:50:45 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:48.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.333 --rc genhtml_branch_coverage=1 00:29:48.333 --rc genhtml_function_coverage=1 00:29:48.333 --rc genhtml_legend=1 00:29:48.333 --rc geninfo_all_blocks=1 00:29:48.333 --rc geninfo_unexecuted_blocks=1 00:29:48.333 
00:29:48.333 ' 00:29:48.333 13:50:45 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:48.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.333 --rc genhtml_branch_coverage=1 00:29:48.333 --rc genhtml_function_coverage=1 00:29:48.333 --rc genhtml_legend=1 00:29:48.333 --rc geninfo_all_blocks=1 00:29:48.333 --rc geninfo_unexecuted_blocks=1 00:29:48.333 00:29:48.333 ' 00:29:48.333 13:50:45 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:48.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.333 --rc genhtml_branch_coverage=1 00:29:48.333 --rc genhtml_function_coverage=1 00:29:48.333 --rc genhtml_legend=1 00:29:48.333 --rc geninfo_all_blocks=1 00:29:48.334 --rc geninfo_unexecuted_blocks=1 00:29:48.334 00:29:48.334 ' 00:29:48.334 13:50:45 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:29:48.334 13:50:45 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:29:48.334 13:50:45 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:29:48.334 13:50:45 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:29:48.334 13:50:45 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:29:48.334 13:50:45 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:48.334 13:50:45 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:29:48.334 13:50:45 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:48.334 13:50:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:48.334 13:50:45 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58019 00:29:48.334 13:50:45 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58019 00:29:48.334 13:50:45 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:29:48.334 13:50:45 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58019 ']' 00:29:48.334 13:50:45 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:48.334 13:50:45 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:48.334 13:50:45 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:48.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:48.334 13:50:45 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:48.334 13:50:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:48.334 [2024-11-20 13:50:45.475428] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:29:48.334 [2024-11-20 13:50:45.475588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58019 ] 00:29:48.334 [2024-11-20 13:50:45.623878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:48.592 [2024-11-20 13:50:45.684270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:48.592 [2024-11-20 13:50:45.684344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:48.592 [2024-11-20 13:50:45.754804] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:29:49.160 13:50:46 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:49.160 13:50:46 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:29:49.160 13:50:46 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58036 00:29:49.160 13:50:46 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:29:49.160 13:50:46 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:29:49.418 [ 00:29:49.418 "bdev_malloc_delete", 00:29:49.418 "bdev_malloc_create", 00:29:49.418 "bdev_null_resize", 00:29:49.418 "bdev_null_delete", 00:29:49.418 "bdev_null_create", 00:29:49.418 "bdev_nvme_cuse_unregister", 00:29:49.418 "bdev_nvme_cuse_register", 00:29:49.418 "bdev_opal_new_user", 00:29:49.418 "bdev_opal_set_lock_state", 00:29:49.418 "bdev_opal_delete", 00:29:49.418 "bdev_opal_get_info", 00:29:49.418 "bdev_opal_create", 00:29:49.419 "bdev_nvme_opal_revert", 00:29:49.419 "bdev_nvme_opal_init", 00:29:49.419 "bdev_nvme_send_cmd", 00:29:49.419 "bdev_nvme_set_keys", 00:29:49.419 "bdev_nvme_get_path_iostat", 00:29:49.419 "bdev_nvme_get_mdns_discovery_info", 00:29:49.419 "bdev_nvme_stop_mdns_discovery", 00:29:49.419 "bdev_nvme_start_mdns_discovery", 00:29:49.419 "bdev_nvme_set_multipath_policy", 00:29:49.419 "bdev_nvme_set_preferred_path", 00:29:49.419 "bdev_nvme_get_io_paths", 00:29:49.419 "bdev_nvme_remove_error_injection", 00:29:49.419 "bdev_nvme_add_error_injection", 00:29:49.419 "bdev_nvme_get_discovery_info", 00:29:49.419 "bdev_nvme_stop_discovery", 00:29:49.419 "bdev_nvme_start_discovery", 00:29:49.419 "bdev_nvme_get_controller_health_info", 00:29:49.419 "bdev_nvme_disable_controller", 00:29:49.419 "bdev_nvme_enable_controller", 00:29:49.419 "bdev_nvme_reset_controller", 00:29:49.419 "bdev_nvme_get_transport_statistics", 00:29:49.419 "bdev_nvme_apply_firmware", 00:29:49.419 "bdev_nvme_detach_controller", 00:29:49.419 "bdev_nvme_get_controllers", 00:29:49.419 "bdev_nvme_attach_controller", 00:29:49.419 "bdev_nvme_set_hotplug", 00:29:49.419 "bdev_nvme_set_options", 00:29:49.419 "bdev_passthru_delete", 00:29:49.419 "bdev_passthru_create", 00:29:49.419 "bdev_lvol_set_parent_bdev", 00:29:49.419 "bdev_lvol_set_parent", 00:29:49.419 "bdev_lvol_check_shallow_copy", 00:29:49.419 "bdev_lvol_start_shallow_copy", 00:29:49.419 "bdev_lvol_grow_lvstore", 00:29:49.419 "bdev_lvol_get_lvols", 00:29:49.419 "bdev_lvol_get_lvstores", 00:29:49.419 "bdev_lvol_delete", 00:29:49.419 "bdev_lvol_set_read_only", 00:29:49.419 "bdev_lvol_resize", 00:29:49.419 "bdev_lvol_decouple_parent", 00:29:49.419 "bdev_lvol_inflate", 00:29:49.419 "bdev_lvol_rename", 00:29:49.419 "bdev_lvol_clone_bdev", 00:29:49.419 "bdev_lvol_clone", 00:29:49.419 "bdev_lvol_snapshot", 
00:29:49.419 "bdev_lvol_create", 00:29:49.419 "bdev_lvol_delete_lvstore", 00:29:49.419 "bdev_lvol_rename_lvstore", 00:29:49.419 "bdev_lvol_create_lvstore", 00:29:49.419 "bdev_raid_set_options", 00:29:49.419 "bdev_raid_remove_base_bdev", 00:29:49.419 "bdev_raid_add_base_bdev", 00:29:49.419 "bdev_raid_delete", 00:29:49.419 "bdev_raid_create", 00:29:49.419 "bdev_raid_get_bdevs", 00:29:49.419 "bdev_error_inject_error", 00:29:49.419 "bdev_error_delete", 00:29:49.419 "bdev_error_create", 00:29:49.419 "bdev_split_delete", 00:29:49.419 "bdev_split_create", 00:29:49.419 "bdev_delay_delete", 00:29:49.419 "bdev_delay_create", 00:29:49.419 "bdev_delay_update_latency", 00:29:49.419 "bdev_zone_block_delete", 00:29:49.419 "bdev_zone_block_create", 00:29:49.419 "blobfs_create", 00:29:49.419 "blobfs_detect", 00:29:49.419 "blobfs_set_cache_size", 00:29:49.419 "bdev_aio_delete", 00:29:49.419 "bdev_aio_rescan", 00:29:49.419 "bdev_aio_create", 00:29:49.419 "bdev_ftl_set_property", 00:29:49.419 "bdev_ftl_get_properties", 00:29:49.419 "bdev_ftl_get_stats", 00:29:49.419 "bdev_ftl_unmap", 00:29:49.419 "bdev_ftl_unload", 00:29:49.419 "bdev_ftl_delete", 00:29:49.419 "bdev_ftl_load", 00:29:49.419 "bdev_ftl_create", 00:29:49.419 "bdev_virtio_attach_controller", 00:29:49.419 "bdev_virtio_scsi_get_devices", 00:29:49.419 "bdev_virtio_detach_controller", 00:29:49.419 "bdev_virtio_blk_set_hotplug", 00:29:49.419 "bdev_iscsi_delete", 00:29:49.419 "bdev_iscsi_create", 00:29:49.419 "bdev_iscsi_set_options", 00:29:49.419 "bdev_uring_delete", 00:29:49.419 "bdev_uring_rescan", 00:29:49.419 "bdev_uring_create", 00:29:49.419 "accel_error_inject_error", 00:29:49.419 "ioat_scan_accel_module", 00:29:49.419 "dsa_scan_accel_module", 00:29:49.419 "iaa_scan_accel_module", 00:29:49.419 "keyring_file_remove_key", 00:29:49.419 "keyring_file_add_key", 00:29:49.419 "keyring_linux_set_options", 00:29:49.419 "fsdev_aio_delete", 00:29:49.419 "fsdev_aio_create", 00:29:49.419 "iscsi_get_histogram", 00:29:49.419 "iscsi_enable_histogram", 00:29:49.419 "iscsi_set_options", 00:29:49.419 "iscsi_get_auth_groups", 00:29:49.419 "iscsi_auth_group_remove_secret", 00:29:49.419 "iscsi_auth_group_add_secret", 00:29:49.419 "iscsi_delete_auth_group", 00:29:49.419 "iscsi_create_auth_group", 00:29:49.419 "iscsi_set_discovery_auth", 00:29:49.419 "iscsi_get_options", 00:29:49.419 "iscsi_target_node_request_logout", 00:29:49.419 "iscsi_target_node_set_redirect", 00:29:49.419 "iscsi_target_node_set_auth", 00:29:49.419 "iscsi_target_node_add_lun", 00:29:49.419 "iscsi_get_stats", 00:29:49.419 "iscsi_get_connections", 00:29:49.419 "iscsi_portal_group_set_auth", 00:29:49.419 "iscsi_start_portal_group", 00:29:49.419 "iscsi_delete_portal_group", 00:29:49.419 "iscsi_create_portal_group", 00:29:49.419 "iscsi_get_portal_groups", 00:29:49.419 "iscsi_delete_target_node", 00:29:49.419 "iscsi_target_node_remove_pg_ig_maps", 00:29:49.419 "iscsi_target_node_add_pg_ig_maps", 00:29:49.419 "iscsi_create_target_node", 00:29:49.419 "iscsi_get_target_nodes", 00:29:49.419 "iscsi_delete_initiator_group", 00:29:49.419 "iscsi_initiator_group_remove_initiators", 00:29:49.419 "iscsi_initiator_group_add_initiators", 00:29:49.419 "iscsi_create_initiator_group", 00:29:49.419 "iscsi_get_initiator_groups", 00:29:49.419 "nvmf_set_crdt", 00:29:49.419 "nvmf_set_config", 00:29:49.419 "nvmf_set_max_subsystems", 00:29:49.419 "nvmf_stop_mdns_prr", 00:29:49.419 "nvmf_publish_mdns_prr", 00:29:49.419 "nvmf_subsystem_get_listeners", 00:29:49.419 "nvmf_subsystem_get_qpairs", 00:29:49.419 
"nvmf_subsystem_get_controllers", 00:29:49.419 "nvmf_get_stats", 00:29:49.419 "nvmf_get_transports", 00:29:49.419 "nvmf_create_transport", 00:29:49.419 "nvmf_get_targets", 00:29:49.419 "nvmf_delete_target", 00:29:49.419 "nvmf_create_target", 00:29:49.419 "nvmf_subsystem_allow_any_host", 00:29:49.419 "nvmf_subsystem_set_keys", 00:29:49.419 "nvmf_subsystem_remove_host", 00:29:49.419 "nvmf_subsystem_add_host", 00:29:49.419 "nvmf_ns_remove_host", 00:29:49.419 "nvmf_ns_add_host", 00:29:49.419 "nvmf_subsystem_remove_ns", 00:29:49.419 "nvmf_subsystem_set_ns_ana_group", 00:29:49.419 "nvmf_subsystem_add_ns", 00:29:49.419 "nvmf_subsystem_listener_set_ana_state", 00:29:49.419 "nvmf_discovery_get_referrals", 00:29:49.419 "nvmf_discovery_remove_referral", 00:29:49.419 "nvmf_discovery_add_referral", 00:29:49.419 "nvmf_subsystem_remove_listener", 00:29:49.419 "nvmf_subsystem_add_listener", 00:29:49.419 "nvmf_delete_subsystem", 00:29:49.419 "nvmf_create_subsystem", 00:29:49.419 "nvmf_get_subsystems", 00:29:49.419 "env_dpdk_get_mem_stats", 00:29:49.419 "nbd_get_disks", 00:29:49.419 "nbd_stop_disk", 00:29:49.419 "nbd_start_disk", 00:29:49.419 "ublk_recover_disk", 00:29:49.419 "ublk_get_disks", 00:29:49.419 "ublk_stop_disk", 00:29:49.419 "ublk_start_disk", 00:29:49.419 "ublk_destroy_target", 00:29:49.419 "ublk_create_target", 00:29:49.419 "virtio_blk_create_transport", 00:29:49.419 "virtio_blk_get_transports", 00:29:49.419 "vhost_controller_set_coalescing", 00:29:49.419 "vhost_get_controllers", 00:29:49.419 "vhost_delete_controller", 00:29:49.419 "vhost_create_blk_controller", 00:29:49.419 "vhost_scsi_controller_remove_target", 00:29:49.419 "vhost_scsi_controller_add_target", 00:29:49.419 "vhost_start_scsi_controller", 00:29:49.419 "vhost_create_scsi_controller", 00:29:49.420 "thread_set_cpumask", 00:29:49.420 "scheduler_set_options", 00:29:49.420 "framework_get_governor", 00:29:49.420 "framework_get_scheduler", 00:29:49.420 "framework_set_scheduler", 00:29:49.420 "framework_get_reactors", 00:29:49.420 "thread_get_io_channels", 00:29:49.420 "thread_get_pollers", 00:29:49.420 "thread_get_stats", 00:29:49.420 "framework_monitor_context_switch", 00:29:49.420 "spdk_kill_instance", 00:29:49.420 "log_enable_timestamps", 00:29:49.420 "log_get_flags", 00:29:49.420 "log_clear_flag", 00:29:49.420 "log_set_flag", 00:29:49.420 "log_get_level", 00:29:49.420 "log_set_level", 00:29:49.420 "log_get_print_level", 00:29:49.420 "log_set_print_level", 00:29:49.420 "framework_enable_cpumask_locks", 00:29:49.420 "framework_disable_cpumask_locks", 00:29:49.420 "framework_wait_init", 00:29:49.420 "framework_start_init", 00:29:49.420 "scsi_get_devices", 00:29:49.420 "bdev_get_histogram", 00:29:49.420 "bdev_enable_histogram", 00:29:49.420 "bdev_set_qos_limit", 00:29:49.420 "bdev_set_qd_sampling_period", 00:29:49.420 "bdev_get_bdevs", 00:29:49.420 "bdev_reset_iostat", 00:29:49.420 "bdev_get_iostat", 00:29:49.420 "bdev_examine", 00:29:49.420 "bdev_wait_for_examine", 00:29:49.420 "bdev_set_options", 00:29:49.420 "accel_get_stats", 00:29:49.420 "accel_set_options", 00:29:49.420 "accel_set_driver", 00:29:49.420 "accel_crypto_key_destroy", 00:29:49.420 "accel_crypto_keys_get", 00:29:49.420 "accel_crypto_key_create", 00:29:49.420 "accel_assign_opc", 00:29:49.420 "accel_get_module_info", 00:29:49.420 "accel_get_opc_assignments", 00:29:49.420 "vmd_rescan", 00:29:49.420 "vmd_remove_device", 00:29:49.420 "vmd_enable", 00:29:49.420 "sock_get_default_impl", 00:29:49.420 "sock_set_default_impl", 00:29:49.420 "sock_impl_set_options", 00:29:49.420 
"sock_impl_get_options", 00:29:49.420 "iobuf_get_stats", 00:29:49.420 "iobuf_set_options", 00:29:49.420 "keyring_get_keys", 00:29:49.420 "framework_get_pci_devices", 00:29:49.420 "framework_get_config", 00:29:49.420 "framework_get_subsystems", 00:29:49.420 "fsdev_set_opts", 00:29:49.420 "fsdev_get_opts", 00:29:49.420 "trace_get_info", 00:29:49.420 "trace_get_tpoint_group_mask", 00:29:49.420 "trace_disable_tpoint_group", 00:29:49.420 "trace_enable_tpoint_group", 00:29:49.420 "trace_clear_tpoint_mask", 00:29:49.420 "trace_set_tpoint_mask", 00:29:49.420 "notify_get_notifications", 00:29:49.420 "notify_get_types", 00:29:49.420 "spdk_get_version", 00:29:49.420 "rpc_get_methods" 00:29:49.420 ] 00:29:49.420 13:50:46 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:29:49.420 13:50:46 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:49.420 13:50:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:49.420 13:50:46 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:29:49.420 13:50:46 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58019 00:29:49.420 13:50:46 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58019 ']' 00:29:49.420 13:50:46 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58019 00:29:49.678 13:50:46 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:29:49.678 13:50:46 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:49.678 13:50:46 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58019 00:29:49.678 13:50:46 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:49.678 13:50:46 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:49.678 13:50:46 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58019' 00:29:49.678 killing process with pid 58019 00:29:49.678 13:50:46 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58019 00:29:49.678 13:50:46 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58019 00:29:49.936 00:29:49.936 real 0m1.947s 00:29:49.936 user 0m3.499s 00:29:49.936 sys 0m0.559s 00:29:49.937 13:50:47 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:49.937 13:50:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:49.937 ************************************ 00:29:49.937 END TEST spdkcli_tcp 00:29:49.937 ************************************ 00:29:49.937 13:50:47 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:29:49.937 13:50:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:49.937 13:50:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:49.937 13:50:47 -- common/autotest_common.sh@10 -- # set +x 00:29:49.937 ************************************ 00:29:49.937 START TEST dpdk_mem_utility 00:29:49.937 ************************************ 00:29:49.937 13:50:47 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:29:50.252 * Looking for test storage... 
00:29:50.252 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:29:50.252 13:50:47 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:50.252 13:50:47 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:29:50.252 13:50:47 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:50.252 13:50:47 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:50.252 13:50:47 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:50.252 13:50:47 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:50.252 13:50:47 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:50.252 13:50:47 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:29:50.252 13:50:47 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:29:50.252 13:50:47 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:29:50.252 13:50:47 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:29:50.252 13:50:47 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:29:50.252 13:50:47 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:29:50.252 13:50:47 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:29:50.252 13:50:47 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:50.252 13:50:47 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:29:50.252 13:50:47 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:29:50.252 13:50:47 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:50.252 13:50:47 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:50.252 13:50:47 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:29:50.252 13:50:47 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:29:50.252 13:50:47 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:50.252 13:50:47 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:29:50.252 13:50:47 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:29:50.252 13:50:47 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:29:50.252 13:50:47 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:29:50.252 13:50:47 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:50.252 13:50:47 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:29:50.252 13:50:47 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:29:50.252 13:50:47 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:50.252 13:50:47 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:50.252 13:50:47 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:29:50.252 13:50:47 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:50.252 13:50:47 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:50.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.252 --rc genhtml_branch_coverage=1 00:29:50.252 --rc genhtml_function_coverage=1 00:29:50.252 --rc genhtml_legend=1 00:29:50.252 --rc geninfo_all_blocks=1 00:29:50.252 --rc geninfo_unexecuted_blocks=1 00:29:50.252 00:29:50.252 ' 00:29:50.252 13:50:47 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:50.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.252 --rc 
genhtml_branch_coverage=1 00:29:50.252 --rc genhtml_function_coverage=1 00:29:50.252 --rc genhtml_legend=1 00:29:50.252 --rc geninfo_all_blocks=1 00:29:50.252 --rc geninfo_unexecuted_blocks=1 00:29:50.252 00:29:50.252 ' 00:29:50.252 13:50:47 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:50.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.252 --rc genhtml_branch_coverage=1 00:29:50.252 --rc genhtml_function_coverage=1 00:29:50.252 --rc genhtml_legend=1 00:29:50.252 --rc geninfo_all_blocks=1 00:29:50.252 --rc geninfo_unexecuted_blocks=1 00:29:50.252 00:29:50.252 ' 00:29:50.252 13:50:47 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:50.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.252 --rc genhtml_branch_coverage=1 00:29:50.252 --rc genhtml_function_coverage=1 00:29:50.252 --rc genhtml_legend=1 00:29:50.252 --rc geninfo_all_blocks=1 00:29:50.252 --rc geninfo_unexecuted_blocks=1 00:29:50.252 00:29:50.252 ' 00:29:50.252 13:50:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:29:50.252 13:50:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58118 00:29:50.252 13:50:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:50.252 13:50:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58118 00:29:50.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:50.252 13:50:47 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58118 ']' 00:29:50.252 13:50:47 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:50.252 13:50:47 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:50.252 13:50:47 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:50.252 13:50:47 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:50.252 13:50:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:29:50.252 [2024-11-20 13:50:47.475896] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:29:50.252 [2024-11-20 13:50:47.475972] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58118 ] 00:29:50.510 [2024-11-20 13:50:47.626072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:50.510 [2024-11-20 13:50:47.680689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:50.510 [2024-11-20 13:50:47.739102] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:29:50.772 13:50:47 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:50.772 13:50:47 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:29:50.772 13:50:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:29:50.772 13:50:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:29:50.772 13:50:47 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.772 13:50:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:29:50.772 { 00:29:50.772 "filename": "/tmp/spdk_mem_dump.txt" 00:29:50.772 } 00:29:50.772 13:50:47 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.772 13:50:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:29:50.772 DPDK memory size 818.000000 MiB in 1 heap(s) 00:29:50.772 1 heaps totaling size 818.000000 MiB 00:29:50.772 size: 818.000000 MiB heap id: 0 00:29:50.772 end heaps---------- 00:29:50.772 9 mempools totaling size 603.782043 MiB 00:29:50.772 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:29:50.772 size: 158.602051 MiB name: PDU_data_out_Pool 00:29:50.772 size: 100.555481 MiB name: bdev_io_58118 00:29:50.772 size: 50.003479 MiB name: msgpool_58118 00:29:50.772 size: 36.509338 MiB name: fsdev_io_58118 00:29:50.772 size: 21.763794 MiB name: PDU_Pool 00:29:50.772 size: 19.513306 MiB name: SCSI_TASK_Pool 00:29:50.772 size: 4.133484 MiB name: evtpool_58118 00:29:50.772 size: 0.026123 MiB name: Session_Pool 00:29:50.772 end mempools------- 00:29:50.772 6 memzones totaling size 4.142822 MiB 00:29:50.772 size: 1.000366 MiB name: RG_ring_0_58118 00:29:50.772 size: 1.000366 MiB name: RG_ring_1_58118 00:29:50.772 size: 1.000366 MiB name: RG_ring_4_58118 00:29:50.772 size: 1.000366 MiB name: RG_ring_5_58118 00:29:50.772 size: 0.125366 MiB name: RG_ring_2_58118 00:29:50.772 size: 0.015991 MiB name: RG_ring_3_58118 00:29:50.772 end memzones------- 00:29:50.772 13:50:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:29:50.772 heap id: 0 total size: 818.000000 MiB number of busy elements: 313 number of free elements: 15 00:29:50.772 list of free elements. 
size: 10.803223 MiB 00:29:50.772 element at address: 0x200019200000 with size: 0.999878 MiB 00:29:50.772 element at address: 0x200019400000 with size: 0.999878 MiB 00:29:50.772 element at address: 0x200032000000 with size: 0.994446 MiB 00:29:50.772 element at address: 0x200000400000 with size: 0.993958 MiB 00:29:50.772 element at address: 0x200006400000 with size: 0.959839 MiB 00:29:50.772 element at address: 0x200012c00000 with size: 0.944275 MiB 00:29:50.772 element at address: 0x200019600000 with size: 0.936584 MiB 00:29:50.772 element at address: 0x200000200000 with size: 0.717346 MiB 00:29:50.772 element at address: 0x20001ae00000 with size: 0.568420 MiB 00:29:50.772 element at address: 0x20000a600000 with size: 0.488892 MiB 00:29:50.772 element at address: 0x200000c00000 with size: 0.486267 MiB 00:29:50.772 element at address: 0x200019800000 with size: 0.485657 MiB 00:29:50.772 element at address: 0x200003e00000 with size: 0.480286 MiB 00:29:50.772 element at address: 0x200028200000 with size: 0.395752 MiB 00:29:50.772 element at address: 0x200000800000 with size: 0.351746 MiB 00:29:50.772 list of standard malloc elements. size: 199.267883 MiB 00:29:50.772 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:29:50.772 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:29:50.772 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:29:50.772 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:29:50.772 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:29:50.772 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:29:50.772 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:29:50.772 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:29:50.772 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:29:50.772 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:29:50.772 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:29:50.772 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:29:50.772 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:29:50.772 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:29:50.772 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:29:50.772 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:29:50.772 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:29:50.772 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:29:50.772 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:29:50.772 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:29:50.772 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:29:50.772 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:29:50.772 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:29:50.772 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:29:50.772 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:29:50.772 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:29:50.772 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:29:50.772 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:29:50.772 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:29:50.772 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:29:50.772 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:29:50.772 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:29:50.772 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:29:50.772 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:29:50.772 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:29:50.772 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:29:50.772 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:29:50.772 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:29:50.772 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:29:50.772 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:29:50.772 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:29:50.772 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:29:50.772 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:29:50.772 element at address: 0x20000085e580 with size: 0.000183 MiB 00:29:50.772 element at address: 0x20000087e840 with size: 0.000183 MiB 00:29:50.772 element at address: 0x20000087e900 with size: 0.000183 MiB 00:29:50.772 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:29:50.772 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:29:50.772 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:29:50.772 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:29:50.772 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20000087f080 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20000087f140 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20000087f200 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20000087f380 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20000087f440 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20000087f500 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20000087f680 with size: 0.000183 MiB 00:29:50.773 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:29:50.773 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:29:50.773 element at 
address: 0x200000c7d3c0 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000cff000 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200003efb980 with size: 0.000183 MiB 00:29:50.773 element at address: 0x2000064fdd80 
with size: 0.000183 MiB 00:29:50.773 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:29:50.773 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:29:50.773 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:29:50.773 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:29:50.773 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae92f80 with size: 0.000183 MiB 
00:29:50.773 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:29:50.773 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20001ae95200 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:29:50.774 element at 
address: 0x200028265500 with size: 0.000183 MiB 00:29:50.774 element at address: 0x2000282655c0 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826c1c0 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826c3c0 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826c480 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826c540 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826c600 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826c780 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826c840 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826c900 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826d080 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826d140 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826d200 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826d380 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826d440 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826d500 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826d680 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826d740 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826d800 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826d980 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826da40 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826db00 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826de00 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826df80 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826e040 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826e100 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826e280 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826e340 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826e400 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826e580 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826e640 
with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826e700 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826e880 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826e940 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826f000 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826f180 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826f240 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826f300 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826f480 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826f540 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826f600 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826f780 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826f840 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826f900 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:29:50.774 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:29:50.774 list of memzone associated elements. 
size: 607.928894 MiB 00:29:50.774 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:29:50.774 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:29:50.774 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:29:50.774 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:29:50.774 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:29:50.774 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58118_0 00:29:50.774 element at address: 0x200000dff380 with size: 48.003052 MiB 00:29:50.774 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58118_0 00:29:50.774 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:29:50.774 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58118_0 00:29:50.774 element at address: 0x2000199be940 with size: 20.255554 MiB 00:29:50.774 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:29:50.774 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:29:50.774 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:29:50.774 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:29:50.774 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58118_0 00:29:50.774 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:29:50.774 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58118 00:29:50.774 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:29:50.774 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58118 00:29:50.774 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:29:50.774 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:29:50.774 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:29:50.774 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:29:50.774 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:29:50.774 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:29:50.774 element at address: 0x200003efba40 with size: 1.008118 MiB 00:29:50.774 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:29:50.774 element at address: 0x200000cff180 with size: 1.000488 MiB 00:29:50.774 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58118 00:29:50.774 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:29:50.774 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58118 00:29:50.774 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:29:50.774 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58118 00:29:50.774 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:29:50.774 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58118 00:29:50.774 element at address: 0x20000087f740 with size: 0.500488 MiB 00:29:50.774 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58118 00:29:50.775 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:29:50.775 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58118 00:29:50.775 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:29:50.775 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:29:50.775 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:29:50.775 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:29:50.775 element at address: 0x20001987c540 with size: 0.250488 MiB 00:29:50.775 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:29:50.775 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:29:50.775 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58118 00:29:50.775 element at address: 0x20000085e640 with size: 0.125488 MiB 00:29:50.775 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58118 00:29:50.775 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:29:50.775 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:29:50.775 element at address: 0x200028265680 with size: 0.023743 MiB 00:29:50.775 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:29:50.775 element at address: 0x20000085a380 with size: 0.016113 MiB 00:29:50.775 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58118 00:29:50.775 element at address: 0x20002826b7c0 with size: 0.002441 MiB 00:29:50.775 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:29:50.775 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:29:50.775 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58118 00:29:50.775 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:29:50.775 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58118 00:29:50.775 element at address: 0x20000085a180 with size: 0.000305 MiB 00:29:50.775 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58118 00:29:50.775 element at address: 0x20002826c280 with size: 0.000305 MiB 00:29:50.775 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:29:50.775 13:50:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:29:50.775 13:50:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58118 00:29:50.775 13:50:48 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58118 ']' 00:29:50.775 13:50:48 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58118 00:29:50.775 13:50:48 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:29:50.775 13:50:48 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:50.775 13:50:48 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58118 00:29:51.075 killing process with pid 58118 00:29:51.075 13:50:48 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:51.075 13:50:48 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:51.075 13:50:48 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58118' 00:29:51.075 13:50:48 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58118 00:29:51.075 13:50:48 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58118 00:29:51.333 00:29:51.333 real 0m1.255s 00:29:51.333 user 0m1.214s 00:29:51.333 sys 0m0.431s 00:29:51.333 ************************************ 00:29:51.333 END TEST dpdk_mem_utility 00:29:51.333 ************************************ 00:29:51.333 13:50:48 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:51.333 13:50:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:29:51.333 13:50:48 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:29:51.333 13:50:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:51.333 13:50:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:51.333 13:50:48 -- common/autotest_common.sh@10 -- # set +x 
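
Stripped of the xtrace noise, the dpdk_mem_utility run that just finished follows a short sequence: start spdk_tgt, ask it over RPC to dump its DPDK memory stats, then feed that dump to dpdk_mem_info.py for the heap/mempool/memzone summary and the per-element view of heap 0. The sketch below uses the paths from the log; the explicit rpc.py call and the sleep-based wait are illustrative stand-ins for the test's rpc_cmd/waitforlisten helpers, not a verbatim copy of test_dpdk_mem_info.sh.

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    MEM_SCRIPT=$SPDK_DIR/scripts/dpdk_mem_info.py

    $SPDK_DIR/build/bin/spdk_tgt &          # the target owns the DPDK heaps being inspected
    spdkpid=$!
    trap 'kill $spdkpid' SIGINT SIGTERM EXIT
    sleep 2                                 # the real test polls the RPC socket instead

    # writes the stats to /tmp/spdk_mem_dump.txt (the filename echoed in the RPC reply above)
    $SPDK_DIR/scripts/rpc.py env_dpdk_get_mem_stats

    "$MEM_SCRIPT"                           # summary: heaps, mempools, memzones
    "$MEM_SCRIPT" -m 0                      # free/busy elements of heap id 0

    trap - SIGINT SIGTERM EXIT
    kill $spdkpid
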
00:29:51.333 ************************************ 00:29:51.333 START TEST event 00:29:51.333 ************************************ 00:29:51.333 13:50:48 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:29:51.333 * Looking for test storage... 00:29:51.333 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:29:51.333 13:50:48 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:51.333 13:50:48 event -- common/autotest_common.sh@1693 -- # lcov --version 00:29:51.333 13:50:48 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:51.591 13:50:48 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:51.591 13:50:48 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:51.591 13:50:48 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:51.591 13:50:48 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:51.591 13:50:48 event -- scripts/common.sh@336 -- # IFS=.-: 00:29:51.591 13:50:48 event -- scripts/common.sh@336 -- # read -ra ver1 00:29:51.591 13:50:48 event -- scripts/common.sh@337 -- # IFS=.-: 00:29:51.591 13:50:48 event -- scripts/common.sh@337 -- # read -ra ver2 00:29:51.591 13:50:48 event -- scripts/common.sh@338 -- # local 'op=<' 00:29:51.591 13:50:48 event -- scripts/common.sh@340 -- # ver1_l=2 00:29:51.591 13:50:48 event -- scripts/common.sh@341 -- # ver2_l=1 00:29:51.591 13:50:48 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:51.591 13:50:48 event -- scripts/common.sh@344 -- # case "$op" in 00:29:51.591 13:50:48 event -- scripts/common.sh@345 -- # : 1 00:29:51.591 13:50:48 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:51.591 13:50:48 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:51.591 13:50:48 event -- scripts/common.sh@365 -- # decimal 1 00:29:51.591 13:50:48 event -- scripts/common.sh@353 -- # local d=1 00:29:51.591 13:50:48 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:51.591 13:50:48 event -- scripts/common.sh@355 -- # echo 1 00:29:51.591 13:50:48 event -- scripts/common.sh@365 -- # ver1[v]=1 00:29:51.591 13:50:48 event -- scripts/common.sh@366 -- # decimal 2 00:29:51.591 13:50:48 event -- scripts/common.sh@353 -- # local d=2 00:29:51.591 13:50:48 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:51.591 13:50:48 event -- scripts/common.sh@355 -- # echo 2 00:29:51.591 13:50:48 event -- scripts/common.sh@366 -- # ver2[v]=2 00:29:51.591 13:50:48 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:51.591 13:50:48 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:51.591 13:50:48 event -- scripts/common.sh@368 -- # return 0 00:29:51.591 13:50:48 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:51.591 13:50:48 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:51.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.591 --rc genhtml_branch_coverage=1 00:29:51.591 --rc genhtml_function_coverage=1 00:29:51.591 --rc genhtml_legend=1 00:29:51.591 --rc geninfo_all_blocks=1 00:29:51.591 --rc geninfo_unexecuted_blocks=1 00:29:51.591 00:29:51.591 ' 00:29:51.591 13:50:48 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:51.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.591 --rc genhtml_branch_coverage=1 00:29:51.591 --rc genhtml_function_coverage=1 00:29:51.591 --rc genhtml_legend=1 00:29:51.591 --rc 
geninfo_all_blocks=1 00:29:51.591 --rc geninfo_unexecuted_blocks=1 00:29:51.591 00:29:51.591 ' 00:29:51.591 13:50:48 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:51.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.591 --rc genhtml_branch_coverage=1 00:29:51.591 --rc genhtml_function_coverage=1 00:29:51.591 --rc genhtml_legend=1 00:29:51.591 --rc geninfo_all_blocks=1 00:29:51.591 --rc geninfo_unexecuted_blocks=1 00:29:51.591 00:29:51.591 ' 00:29:51.591 13:50:48 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:51.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.591 --rc genhtml_branch_coverage=1 00:29:51.591 --rc genhtml_function_coverage=1 00:29:51.591 --rc genhtml_legend=1 00:29:51.591 --rc geninfo_all_blocks=1 00:29:51.591 --rc geninfo_unexecuted_blocks=1 00:29:51.591 00:29:51.591 ' 00:29:51.591 13:50:48 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:29:51.591 13:50:48 event -- bdev/nbd_common.sh@6 -- # set -e 00:29:51.591 13:50:48 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:29:51.591 13:50:48 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:29:51.591 13:50:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:51.591 13:50:48 event -- common/autotest_common.sh@10 -- # set +x 00:29:51.591 ************************************ 00:29:51.591 START TEST event_perf 00:29:51.591 ************************************ 00:29:51.591 13:50:48 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:29:51.591 Running I/O for 1 seconds...[2024-11-20 13:50:48.774833] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:29:51.591 [2024-11-20 13:50:48.775060] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58190 ] 00:29:51.849 [2024-11-20 13:50:48.928751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:51.849 [2024-11-20 13:50:48.988178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:51.849 [2024-11-20 13:50:48.988290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:51.849 [2024-11-20 13:50:48.988355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:51.849 [2024-11-20 13:50:48.988358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.783 Running I/O for 1 seconds... 00:29:52.783 lcore 0: 95472 00:29:52.783 lcore 1: 95469 00:29:52.783 lcore 2: 95471 00:29:52.783 lcore 3: 95469 00:29:52.783 done. 
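
event_perf was launched with -m 0xF -t 1, so it spins up one reactor per core in the mask and, after the one-second run, prints the per-lcore event counts seen above. The awk post-processing below is purely illustrative (it is not part of the test) and simply turns those "lcore N: <count>" lines into an aggregate figure.

    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 |
      awk '/^lcore [0-9]+:/ { total += $3; cores++ }
           END { if (cores) printf "%d events on %d cores in 1 s (avg %d per core)\n",
                                   total, cores, total / cores }'
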
00:29:52.783 00:29:52.783 real 0m1.287s 00:29:52.783 user 0m4.110s 00:29:52.783 sys 0m0.052s 00:29:52.783 13:50:50 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:52.783 13:50:50 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:29:52.783 ************************************ 00:29:52.783 END TEST event_perf 00:29:52.783 ************************************ 00:29:52.783 13:50:50 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:29:52.783 13:50:50 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:52.783 13:50:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:52.783 13:50:50 event -- common/autotest_common.sh@10 -- # set +x 00:29:53.041 ************************************ 00:29:53.041 START TEST event_reactor 00:29:53.041 ************************************ 00:29:53.041 13:50:50 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:29:53.041 [2024-11-20 13:50:50.135845] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:29:53.041 [2024-11-20 13:50:50.136054] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58223 ] 00:29:53.041 [2024-11-20 13:50:50.289875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:53.299 [2024-11-20 13:50:50.373670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:54.236 test_start 00:29:54.236 oneshot 00:29:54.236 tick 100 00:29:54.236 tick 100 00:29:54.236 tick 250 00:29:54.236 tick 100 00:29:54.236 tick 100 00:29:54.236 tick 250 00:29:54.236 tick 500 00:29:54.236 tick 100 00:29:54.236 tick 100 00:29:54.236 tick 100 00:29:54.236 tick 250 00:29:54.236 tick 100 00:29:54.236 tick 100 00:29:54.236 test_end 00:29:54.236 ************************************ 00:29:54.236 END TEST event_reactor 00:29:54.236 ************************************ 00:29:54.236 00:29:54.236 real 0m1.336s 00:29:54.236 user 0m1.167s 00:29:54.236 sys 0m0.062s 00:29:54.236 13:50:51 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:54.236 13:50:51 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:29:54.236 13:50:51 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:29:54.236 13:50:51 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:54.236 13:50:51 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:54.236 13:50:51 event -- common/autotest_common.sh@10 -- # set +x 00:29:54.236 ************************************ 00:29:54.236 START TEST event_reactor_perf 00:29:54.236 ************************************ 00:29:54.237 13:50:51 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:29:54.237 [2024-11-20 13:50:51.538893] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:29:54.237 [2024-11-20 13:50:51.538978] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58259 ] 00:29:54.496 [2024-11-20 13:50:51.691805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:54.496 [2024-11-20 13:50:51.744058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:55.876 test_start 00:29:55.876 test_end 00:29:55.876 Performance: 444639 events per second 00:29:55.876 ************************************ 00:29:55.876 END TEST event_reactor_perf 00:29:55.876 ************************************ 00:29:55.876 00:29:55.876 real 0m1.276s 00:29:55.876 user 0m1.125s 00:29:55.876 sys 0m0.045s 00:29:55.876 13:50:52 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:55.876 13:50:52 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:29:55.876 13:50:52 event -- event/event.sh@49 -- # uname -s 00:29:55.876 13:50:52 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:29:55.876 13:50:52 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:29:55.876 13:50:52 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:55.876 13:50:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:55.876 13:50:52 event -- common/autotest_common.sh@10 -- # set +x 00:29:55.876 ************************************ 00:29:55.876 START TEST event_scheduler 00:29:55.876 ************************************ 00:29:55.876 13:50:52 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:29:55.876 * Looking for test storage... 
00:29:55.876 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:29:55.876 13:50:52 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:55.876 13:50:52 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:29:55.876 13:50:52 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:55.876 13:50:53 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:55.876 13:50:53 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:55.876 13:50:53 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:55.876 13:50:53 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:55.876 13:50:53 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:29:55.876 13:50:53 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:29:55.876 13:50:53 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:29:55.876 13:50:53 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:29:55.876 13:50:53 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:29:55.876 13:50:53 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:29:55.876 13:50:53 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:29:55.876 13:50:53 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:55.876 13:50:53 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:29:55.876 13:50:53 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:29:55.876 13:50:53 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:55.876 13:50:53 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:55.876 13:50:53 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:29:55.876 13:50:53 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:29:55.876 13:50:53 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:55.876 13:50:53 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:29:55.876 13:50:53 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:29:55.876 13:50:53 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:29:55.876 13:50:53 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:29:55.876 13:50:53 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:55.876 13:50:53 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:29:55.876 13:50:53 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:29:55.876 13:50:53 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:55.876 13:50:53 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:55.876 13:50:53 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:29:55.876 13:50:53 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:55.876 13:50:53 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:55.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.876 --rc genhtml_branch_coverage=1 00:29:55.876 --rc genhtml_function_coverage=1 00:29:55.876 --rc genhtml_legend=1 00:29:55.876 --rc geninfo_all_blocks=1 00:29:55.876 --rc geninfo_unexecuted_blocks=1 00:29:55.876 00:29:55.876 ' 00:29:55.876 13:50:53 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:55.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.876 --rc genhtml_branch_coverage=1 00:29:55.876 --rc genhtml_function_coverage=1 00:29:55.876 --rc genhtml_legend=1 00:29:55.876 --rc geninfo_all_blocks=1 00:29:55.877 --rc geninfo_unexecuted_blocks=1 00:29:55.877 00:29:55.877 ' 00:29:55.877 13:50:53 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:55.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.877 --rc genhtml_branch_coverage=1 00:29:55.877 --rc genhtml_function_coverage=1 00:29:55.877 --rc genhtml_legend=1 00:29:55.877 --rc geninfo_all_blocks=1 00:29:55.877 --rc geninfo_unexecuted_blocks=1 00:29:55.877 00:29:55.877 ' 00:29:55.877 13:50:53 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:55.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.877 --rc genhtml_branch_coverage=1 00:29:55.877 --rc genhtml_function_coverage=1 00:29:55.877 --rc genhtml_legend=1 00:29:55.877 --rc geninfo_all_blocks=1 00:29:55.877 --rc geninfo_unexecuted_blocks=1 00:29:55.877 00:29:55.877 ' 00:29:55.877 13:50:53 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:29:55.877 13:50:53 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58328 00:29:55.877 13:50:53 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:29:55.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
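
The scheduler app above is started with --wait-for-rpc, and the harness then blocks in the waitforlisten helper traced from common/autotest_common.sh until /var/tmp/spdk.sock answers. Below is a minimal sketch of that polling pattern; the function name, retry count, and sleep interval are illustrative rather than the exact values used by the helper.

    wait_for_rpc_socket() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} retries=100
      local rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
      while (( retries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null || return 1        # target died before listening
        if "$rpc_py" -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1; then
          return 0                                    # socket is up and answering RPCs
        fi
        sleep 0.5
      done
      return 1                                        # gave up after the retry budget
    }
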
00:29:55.877 13:50:53 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:29:55.877 13:50:53 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58328 00:29:55.877 13:50:53 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58328 ']' 00:29:55.877 13:50:53 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:55.877 13:50:53 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:55.877 13:50:53 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:55.877 13:50:53 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:55.877 13:50:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:29:55.877 [2024-11-20 13:50:53.154508] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:29:55.877 [2024-11-20 13:50:53.154701] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58328 ] 00:29:56.138 [2024-11-20 13:50:53.307835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:56.138 [2024-11-20 13:50:53.368977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:56.138 [2024-11-20 13:50:53.369178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:56.138 [2024-11-20 13:50:53.369258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:56.138 [2024-11-20 13:50:53.369262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:57.073 13:50:54 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:57.073 13:50:54 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:29:57.073 13:50:54 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:29:57.073 13:50:54 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.073 13:50:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:29:57.073 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:29:57.073 POWER: Cannot set governor of lcore 0 to userspace 00:29:57.073 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:29:57.073 POWER: Cannot set governor of lcore 0 to performance 00:29:57.073 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:29:57.073 POWER: Cannot set governor of lcore 0 to userspace 00:29:57.073 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:29:57.073 POWER: Cannot set governor of lcore 0 to userspace 00:29:57.073 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:29:57.073 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:29:57.073 POWER: Unable to set Power Management Environment for lcore 0 00:29:57.073 [2024-11-20 13:50:54.078218] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:29:57.073 [2024-11-20 13:50:54.078273] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 
00:29:57.073 [2024-11-20 13:50:54.078301] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:29:57.073 [2024-11-20 13:50:54.078335] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:29:57.073 [2024-11-20 13:50:54.078364] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:29:57.073 [2024-11-20 13:50:54.078393] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:29:57.073 13:50:54 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.073 13:50:54 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:29:57.073 13:50:54 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.073 13:50:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:29:57.073 [2024-11-20 13:50:54.129608] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:29:57.073 [2024-11-20 13:50:54.160698] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:29:57.073 13:50:54 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.073 13:50:54 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:29:57.074 13:50:54 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:57.074 13:50:54 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:57.074 13:50:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:29:57.074 ************************************ 00:29:57.074 START TEST scheduler_create_thread 00:29:57.074 ************************************ 00:29:57.074 13:50:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:29:57.074 13:50:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:29:57.074 13:50:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.074 13:50:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:29:57.074 2 00:29:57.074 13:50:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.074 13:50:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:29:57.074 13:50:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.074 13:50:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:29:57.074 3 00:29:57.074 13:50:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.074 13:50:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:29:57.074 13:50:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.074 13:50:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:29:57.074 4 00:29:57.074 13:50:54 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.074 13:50:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:29:57.074 13:50:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.074 13:50:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:29:57.074 5 00:29:57.074 13:50:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.074 13:50:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:29:57.074 13:50:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.074 13:50:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:29:57.074 6 00:29:57.074 13:50:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.074 13:50:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:29:57.074 13:50:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.074 13:50:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:29:57.074 7 00:29:57.074 13:50:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.074 13:50:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:29:57.074 13:50:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.074 13:50:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:29:57.074 8 00:29:57.074 13:50:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.074 13:50:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:29:57.074 13:50:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.074 13:50:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:29:57.074 9 00:29:57.074 13:50:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.074 13:50:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:29:57.074 13:50:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.074 13:50:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:29:57.643 10 00:29:57.643 13:50:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.643 13:50:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 
0 00:29:57.643 13:50:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.643 13:50:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:29:59.026 13:50:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.026 13:50:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:29:59.026 13:50:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:29:59.026 13:50:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.026 13:50:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:29:59.595 13:50:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.595 13:50:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:29:59.596 13:50:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.596 13:50:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:30:00.533 13:50:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.533 13:50:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:30:00.533 13:50:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:30:00.533 13:50:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.533 13:50:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:30:01.101 13:50:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.101 00:30:01.101 real 0m4.209s 00:30:01.101 user 0m0.028s 00:30:01.101 sys 0m0.010s 00:30:01.101 13:50:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:01.101 13:50:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:30:01.101 ************************************ 00:30:01.101 END TEST scheduler_create_thread 00:30:01.101 ************************************ 00:30:01.360 13:50:58 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:30:01.360 13:50:58 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58328 00:30:01.360 13:50:58 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58328 ']' 00:30:01.360 13:50:58 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58328 00:30:01.360 13:50:58 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:30:01.360 13:50:58 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:01.360 13:50:58 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58328 00:30:01.360 killing process with pid 58328 00:30:01.360 13:50:58 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:30:01.360 13:50:58 event.event_scheduler -- common/autotest_common.sh@964 -- 
# '[' reactor_2 = sudo ']' 00:30:01.360 13:50:58 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58328' 00:30:01.360 13:50:58 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58328 00:30:01.360 13:50:58 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58328 00:30:01.360 [2024-11-20 13:50:58.661017] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:30:01.619 00:30:01.619 real 0m6.040s 00:30:01.619 user 0m13.266s 00:30:01.619 sys 0m0.441s 00:30:01.619 13:50:58 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:01.619 13:50:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:30:01.619 ************************************ 00:30:01.619 END TEST event_scheduler 00:30:01.619 ************************************ 00:30:01.878 13:50:58 event -- event/event.sh@51 -- # modprobe -n nbd 00:30:01.878 13:50:58 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:30:01.878 13:50:58 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:01.878 13:50:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:01.878 13:50:58 event -- common/autotest_common.sh@10 -- # set +x 00:30:01.878 ************************************ 00:30:01.878 START TEST app_repeat 00:30:01.878 ************************************ 00:30:01.878 13:50:58 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:30:01.878 13:50:58 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:01.878 13:50:58 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:01.878 13:50:58 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:30:01.878 13:50:58 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:30:01.878 13:50:58 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:30:01.878 13:50:58 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:30:01.878 13:50:58 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:30:01.878 13:50:58 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58444 00:30:01.878 13:50:58 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:30:01.878 13:50:58 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:30:01.878 13:50:58 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58444' 00:30:01.878 Process app_repeat pid: 58444 00:30:01.878 13:50:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:30:01.878 13:50:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:30:01.878 spdk_app_start Round 0 00:30:01.878 13:50:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58444 /var/tmp/spdk-nbd.sock 00:30:01.878 13:50:58 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58444 ']' 00:30:01.878 13:50:58 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:30:01.878 13:50:58 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:01.878 13:50:58 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:30:01.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
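Each app_repeat round that follows runs the same flow: create two malloc bdevs over the app's /var/tmp/spdk-nbd.sock RPC socket, export them as /dev/nbd0 and /dev/nbd1, write and verify data through the NBD devices, then detach them and SIGTERM the app so the next round restarts it. A condensed sketch of the setup half, reproducing the commands visible in the Round 0 trace below (sizes and names match this run):

    # per-round setup, as seen in the following trace (Round 0)
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    $rpc -s $sock bdev_malloc_create 64 4096        # 64 MB malloc bdev, 4096-byte blocks -> Malloc0
    $rpc -s $sock bdev_malloc_create 64 4096        # second bdev -> Malloc1
    $rpc -s $sock nbd_start_disk Malloc0 /dev/nbd0  # export each bdev as an NBD block device
    $rpc -s $sock nbd_start_disk Malloc1 /dev/nbd1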
00:30:01.878 13:50:58 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:01.878 13:50:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:30:01.878 [2024-11-20 13:50:59.012476] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:30:01.878 [2024-11-20 13:50:59.012580] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58444 ] 00:30:01.878 [2024-11-20 13:50:59.163111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:02.149 [2024-11-20 13:50:59.245792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:02.149 [2024-11-20 13:50:59.245797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:02.149 [2024-11-20 13:50:59.323026] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:30:02.717 13:50:59 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:02.717 13:50:59 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:30:02.717 13:50:59 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:30:02.975 Malloc0 00:30:02.975 13:51:00 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:30:03.235 Malloc1 00:30:03.235 13:51:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:30:03.235 13:51:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:03.494 13:51:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:30:03.494 13:51:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:30:03.494 13:51:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:03.494 13:51:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:30:03.494 13:51:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:30:03.494 13:51:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:03.494 13:51:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:30:03.494 13:51:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:03.494 13:51:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:03.494 13:51:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:03.494 13:51:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:30:03.494 13:51:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:03.494 13:51:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:03.494 13:51:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:30:03.494 /dev/nbd0 00:30:03.753 13:51:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:03.753 13:51:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:03.753 13:51:00 event.app_repeat -- common/autotest_common.sh@872 -- # local 
nbd_name=nbd0 00:30:03.753 13:51:00 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:30:03.753 13:51:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:03.753 13:51:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:03.753 13:51:00 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:30:03.753 13:51:00 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:30:03.753 13:51:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:03.753 13:51:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:03.753 13:51:00 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:30:03.753 1+0 records in 00:30:03.753 1+0 records out 00:30:03.753 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000383004 s, 10.7 MB/s 00:30:03.753 13:51:00 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:30:03.753 13:51:00 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:30:03.753 13:51:00 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:30:03.753 13:51:00 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:03.753 13:51:00 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:30:03.753 13:51:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:03.753 13:51:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:03.753 13:51:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:30:04.013 /dev/nbd1 00:30:04.013 13:51:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:04.013 13:51:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:04.013 13:51:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:30:04.013 13:51:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:30:04.013 13:51:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:04.013 13:51:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:04.013 13:51:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:30:04.013 13:51:01 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:30:04.013 13:51:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:04.013 13:51:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:04.013 13:51:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:30:04.013 1+0 records in 00:30:04.013 1+0 records out 00:30:04.013 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024685 s, 16.6 MB/s 00:30:04.013 13:51:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:30:04.013 13:51:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:30:04.013 13:51:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:30:04.013 13:51:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:04.013 13:51:01 event.app_repeat -- 
common/autotest_common.sh@893 -- # return 0 00:30:04.013 13:51:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:04.013 13:51:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:04.013 13:51:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:04.013 13:51:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:04.013 13:51:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:04.397 13:51:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:30:04.397 { 00:30:04.397 "nbd_device": "/dev/nbd0", 00:30:04.397 "bdev_name": "Malloc0" 00:30:04.397 }, 00:30:04.397 { 00:30:04.397 "nbd_device": "/dev/nbd1", 00:30:04.397 "bdev_name": "Malloc1" 00:30:04.397 } 00:30:04.397 ]' 00:30:04.397 13:51:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:04.397 13:51:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:30:04.397 { 00:30:04.397 "nbd_device": "/dev/nbd0", 00:30:04.397 "bdev_name": "Malloc0" 00:30:04.397 }, 00:30:04.397 { 00:30:04.397 "nbd_device": "/dev/nbd1", 00:30:04.397 "bdev_name": "Malloc1" 00:30:04.397 } 00:30:04.397 ]' 00:30:04.397 13:51:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:30:04.397 /dev/nbd1' 00:30:04.398 13:51:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:30:04.398 /dev/nbd1' 00:30:04.398 13:51:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:04.398 13:51:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:30:04.398 13:51:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:30:04.398 13:51:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:30:04.398 13:51:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:30:04.398 13:51:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:30:04.398 13:51:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:04.398 13:51:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:30:04.398 13:51:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:30:04.398 13:51:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:30:04.398 13:51:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:30:04.398 13:51:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:30:04.398 256+0 records in 00:30:04.398 256+0 records out 00:30:04.398 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00715117 s, 147 MB/s 00:30:04.398 13:51:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:04.398 13:51:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:30:04.398 256+0 records in 00:30:04.398 256+0 records out 00:30:04.398 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250912 s, 41.8 MB/s 00:30:04.398 13:51:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:04.398 13:51:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:30:04.398 256+0 records in 00:30:04.398 
256+0 records out 00:30:04.398 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246975 s, 42.5 MB/s 00:30:04.398 13:51:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:30:04.398 13:51:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:04.398 13:51:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:30:04.398 13:51:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:30:04.398 13:51:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:30:04.398 13:51:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:30:04.398 13:51:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:30:04.398 13:51:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:04.398 13:51:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:30:04.398 13:51:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:04.398 13:51:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:30:04.398 13:51:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:30:04.398 13:51:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:30:04.398 13:51:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:04.398 13:51:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:04.398 13:51:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:04.398 13:51:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:30:04.398 13:51:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:04.398 13:51:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:04.666 13:51:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:04.666 13:51:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:04.666 13:51:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:04.666 13:51:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:04.666 13:51:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:04.666 13:51:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:04.666 13:51:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:30:04.666 13:51:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:30:04.666 13:51:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:04.666 13:51:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:30:04.926 13:51:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:04.926 13:51:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:04.926 13:51:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:04.926 13:51:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:04.926 13:51:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:30:04.926 13:51:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:04.926 13:51:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:30:04.926 13:51:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:30:04.926 13:51:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:04.926 13:51:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:04.927 13:51:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:05.186 13:51:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:30:05.186 13:51:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:30:05.186 13:51:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:05.186 13:51:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:30:05.186 13:51:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:30:05.186 13:51:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:05.186 13:51:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:30:05.186 13:51:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:30:05.186 13:51:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:30:05.186 13:51:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:30:05.186 13:51:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:30:05.186 13:51:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:30:05.186 13:51:02 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:30:05.445 13:51:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:30:05.445 [2024-11-20 13:51:02.739857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:05.704 [2024-11-20 13:51:02.791180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:05.704 [2024-11-20 13:51:02.791182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:05.704 [2024-11-20 13:51:02.831822] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:30:05.704 [2024-11-20 13:51:02.831892] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:30:05.704 [2024-11-20 13:51:02.831900] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:30:09.010 spdk_app_start Round 1 00:30:09.010 13:51:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:30:09.010 13:51:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:30:09.010 13:51:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58444 /var/tmp/spdk-nbd.sock 00:30:09.010 13:51:05 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58444 ']' 00:30:09.010 13:51:05 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:30:09.010 13:51:05 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:09.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:30:09.010 13:51:05 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
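The write/verify half of each round, seen above for Round 0 and repeated unchanged for Rounds 1 and 2, stages 1 MiB of random data in a temp file, copies it onto each NBD device with dd, reads it back with cmp, then detaches the devices before spdk_kill_instance SIGTERM ends the round. A condensation of that trace (same commands, per-device steps folded into a loop; $rpc and $sock as in the earlier setup sketch):

    # condensed from the nbd_rpc_data_verify trace above
    test=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of=$test bs=4096 count=256              # stage 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=$test of=$nbd bs=4096 count=256 oflag=direct     # write it through the NBD device
        cmp -b -n 1M $test $nbd                                # read back and compare
    done
    rm $test
    $rpc -s $sock nbd_stop_disk /dev/nbd0                      # detach both devices
    $rpc -s $sock nbd_stop_disk /dev/nbd1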
00:30:09.010 13:51:05 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:09.010 13:51:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:30:09.010 13:51:05 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:09.010 13:51:05 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:30:09.010 13:51:05 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:30:09.010 Malloc0 00:30:09.010 13:51:06 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:30:09.270 Malloc1 00:30:09.270 13:51:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:30:09.270 13:51:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:09.270 13:51:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:30:09.270 13:51:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:30:09.270 13:51:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:09.270 13:51:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:30:09.270 13:51:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:30:09.270 13:51:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:09.270 13:51:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:30:09.270 13:51:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:09.270 13:51:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:09.270 13:51:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:09.270 13:51:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:30:09.270 13:51:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:09.270 13:51:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:09.270 13:51:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:30:09.530 /dev/nbd0 00:30:09.530 13:51:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:09.530 13:51:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:09.530 13:51:06 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:30:09.530 13:51:06 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:30:09.530 13:51:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:09.530 13:51:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:09.530 13:51:06 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:30:09.530 13:51:06 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:30:09.530 13:51:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:09.530 13:51:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:09.530 13:51:06 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:30:09.530 1+0 records in 00:30:09.530 1+0 records out 
00:30:09.530 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000424702 s, 9.6 MB/s 00:30:09.530 13:51:06 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:30:09.530 13:51:06 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:30:09.530 13:51:06 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:30:09.530 13:51:06 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:09.530 13:51:06 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:30:09.530 13:51:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:09.530 13:51:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:09.530 13:51:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:30:09.789 /dev/nbd1 00:30:09.789 13:51:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:09.789 13:51:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:09.789 13:51:06 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:30:09.789 13:51:06 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:30:09.789 13:51:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:09.789 13:51:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:09.789 13:51:06 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:30:09.789 13:51:06 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:30:09.789 13:51:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:09.789 13:51:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:09.789 13:51:06 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:30:09.789 1+0 records in 00:30:09.789 1+0 records out 00:30:09.789 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274437 s, 14.9 MB/s 00:30:09.789 13:51:06 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:30:09.789 13:51:06 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:30:09.789 13:51:06 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:30:09.789 13:51:06 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:09.789 13:51:06 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:30:09.789 13:51:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:09.789 13:51:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:09.789 13:51:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:09.789 13:51:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:09.789 13:51:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:10.048 13:51:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:30:10.048 { 00:30:10.048 "nbd_device": "/dev/nbd0", 00:30:10.048 "bdev_name": "Malloc0" 00:30:10.048 }, 00:30:10.048 { 00:30:10.048 "nbd_device": "/dev/nbd1", 00:30:10.048 "bdev_name": "Malloc1" 00:30:10.048 } 
00:30:10.048 ]' 00:30:10.048 13:51:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:30:10.048 { 00:30:10.048 "nbd_device": "/dev/nbd0", 00:30:10.048 "bdev_name": "Malloc0" 00:30:10.048 }, 00:30:10.048 { 00:30:10.048 "nbd_device": "/dev/nbd1", 00:30:10.048 "bdev_name": "Malloc1" 00:30:10.048 } 00:30:10.048 ]' 00:30:10.048 13:51:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:10.048 13:51:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:30:10.048 /dev/nbd1' 00:30:10.048 13:51:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:30:10.048 /dev/nbd1' 00:30:10.048 13:51:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:10.048 13:51:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:30:10.048 13:51:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:30:10.048 13:51:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:30:10.048 13:51:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:30:10.048 13:51:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:30:10.048 13:51:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:10.048 13:51:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:30:10.048 13:51:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:30:10.048 13:51:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:30:10.048 13:51:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:30:10.048 13:51:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:30:10.048 256+0 records in 00:30:10.048 256+0 records out 00:30:10.048 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013256 s, 79.1 MB/s 00:30:10.048 13:51:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:10.048 13:51:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:30:10.048 256+0 records in 00:30:10.048 256+0 records out 00:30:10.048 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242956 s, 43.2 MB/s 00:30:10.048 13:51:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:10.048 13:51:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:30:10.048 256+0 records in 00:30:10.048 256+0 records out 00:30:10.048 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0224422 s, 46.7 MB/s 00:30:10.306 13:51:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:30:10.306 13:51:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:10.306 13:51:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:30:10.306 13:51:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:30:10.306 13:51:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:30:10.306 13:51:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:30:10.306 13:51:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:30:10.306 13:51:07 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:30:10.306 13:51:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:30:10.306 13:51:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:10.306 13:51:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:30:10.306 13:51:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:30:10.306 13:51:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:30:10.306 13:51:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:10.306 13:51:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:10.306 13:51:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:10.306 13:51:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:30:10.306 13:51:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:10.306 13:51:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:10.565 13:51:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:10.565 13:51:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:10.565 13:51:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:10.565 13:51:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:10.565 13:51:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:10.565 13:51:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:10.565 13:51:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:30:10.565 13:51:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:30:10.565 13:51:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:10.565 13:51:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:30:10.824 13:51:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:10.824 13:51:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:10.824 13:51:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:10.824 13:51:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:10.824 13:51:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:10.824 13:51:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:10.824 13:51:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:30:10.824 13:51:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:30:10.824 13:51:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:10.824 13:51:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:10.824 13:51:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:11.083 13:51:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:30:11.083 13:51:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:30:11.083 13:51:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:30:11.083 13:51:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:30:11.083 13:51:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:30:11.083 13:51:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:11.083 13:51:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:30:11.083 13:51:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:30:11.083 13:51:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:30:11.083 13:51:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:30:11.083 13:51:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:30:11.083 13:51:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:30:11.083 13:51:08 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:30:11.341 13:51:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:30:11.600 [2024-11-20 13:51:08.712434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:11.600 [2024-11-20 13:51:08.769254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:11.600 [2024-11-20 13:51:08.769257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:11.600 [2024-11-20 13:51:08.813958] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:30:11.600 [2024-11-20 13:51:08.814043] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:30:11.600 [2024-11-20 13:51:08.814053] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:30:14.888 spdk_app_start Round 2 00:30:14.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:30:14.888 13:51:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:30:14.888 13:51:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:30:14.888 13:51:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58444 /var/tmp/spdk-nbd.sock 00:30:14.888 13:51:11 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58444 ']' 00:30:14.888 13:51:11 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:30:14.888 13:51:11 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:14.888 13:51:11 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:30:14.888 13:51:11 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:14.888 13:51:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:30:14.888 13:51:11 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:14.888 13:51:11 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:30:14.888 13:51:11 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:30:14.888 Malloc0 00:30:14.888 13:51:12 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:30:15.147 Malloc1 00:30:15.147 13:51:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:30:15.147 13:51:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:15.147 13:51:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:30:15.147 13:51:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:30:15.147 13:51:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:15.147 13:51:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:30:15.147 13:51:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:30:15.147 13:51:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:15.147 13:51:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:30:15.147 13:51:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:15.147 13:51:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:15.147 13:51:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:15.147 13:51:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:30:15.147 13:51:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:15.147 13:51:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:15.147 13:51:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:30:15.406 /dev/nbd0 00:30:15.406 13:51:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:15.406 13:51:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:15.406 13:51:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:30:15.406 13:51:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:30:15.406 13:51:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:15.406 13:51:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:15.406 13:51:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:30:15.666 13:51:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:30:15.666 13:51:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:15.666 13:51:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:15.666 13:51:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:30:15.666 1+0 records in 00:30:15.666 1+0 records out 
00:30:15.666 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371625 s, 11.0 MB/s 00:30:15.666 13:51:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:30:15.666 13:51:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:30:15.666 13:51:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:30:15.666 13:51:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:15.666 13:51:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:30:15.666 13:51:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:15.666 13:51:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:15.666 13:51:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:30:15.925 /dev/nbd1 00:30:15.925 13:51:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:15.925 13:51:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:15.925 13:51:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:30:15.925 13:51:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:30:15.925 13:51:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:15.925 13:51:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:15.925 13:51:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:30:15.925 13:51:13 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:30:15.925 13:51:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:15.925 13:51:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:15.925 13:51:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:30:15.925 1+0 records in 00:30:15.925 1+0 records out 00:30:15.925 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261339 s, 15.7 MB/s 00:30:15.925 13:51:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:30:15.925 13:51:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:30:15.925 13:51:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:30:15.925 13:51:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:15.925 13:51:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:30:15.925 13:51:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:15.925 13:51:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:15.925 13:51:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:15.925 13:51:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:15.925 13:51:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:16.185 13:51:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:30:16.185 { 00:30:16.185 "nbd_device": "/dev/nbd0", 00:30:16.185 "bdev_name": "Malloc0" 00:30:16.185 }, 00:30:16.185 { 00:30:16.185 "nbd_device": "/dev/nbd1", 00:30:16.185 "bdev_name": "Malloc1" 00:30:16.185 } 
00:30:16.185 ]' 00:30:16.185 13:51:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:30:16.185 { 00:30:16.185 "nbd_device": "/dev/nbd0", 00:30:16.185 "bdev_name": "Malloc0" 00:30:16.185 }, 00:30:16.185 { 00:30:16.185 "nbd_device": "/dev/nbd1", 00:30:16.185 "bdev_name": "Malloc1" 00:30:16.185 } 00:30:16.185 ]' 00:30:16.185 13:51:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:16.185 13:51:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:30:16.185 /dev/nbd1' 00:30:16.185 13:51:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:30:16.185 /dev/nbd1' 00:30:16.185 13:51:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:16.185 13:51:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:30:16.185 13:51:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:30:16.185 13:51:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:30:16.185 13:51:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:30:16.185 13:51:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:30:16.185 13:51:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:16.185 13:51:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:30:16.185 13:51:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:30:16.185 13:51:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:30:16.185 13:51:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:30:16.185 13:51:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:30:16.185 256+0 records in 00:30:16.185 256+0 records out 00:30:16.185 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00576032 s, 182 MB/s 00:30:16.185 13:51:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:16.186 13:51:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:30:16.186 256+0 records in 00:30:16.186 256+0 records out 00:30:16.186 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212305 s, 49.4 MB/s 00:30:16.186 13:51:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:16.186 13:51:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:30:16.186 256+0 records in 00:30:16.186 256+0 records out 00:30:16.186 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0229664 s, 45.7 MB/s 00:30:16.186 13:51:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:30:16.186 13:51:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:16.186 13:51:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:30:16.186 13:51:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:30:16.186 13:51:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:30:16.186 13:51:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:30:16.186 13:51:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:30:16.186 13:51:13 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:16.186 13:51:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:30:16.186 13:51:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:16.186 13:51:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:30:16.186 13:51:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:30:16.186 13:51:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:30:16.186 13:51:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:16.186 13:51:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:16.186 13:51:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:16.186 13:51:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:30:16.186 13:51:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:16.186 13:51:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:16.444 13:51:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:16.444 13:51:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:16.444 13:51:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:16.444 13:51:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:16.444 13:51:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:16.444 13:51:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:16.444 13:51:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:30:16.444 13:51:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:30:16.444 13:51:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:16.444 13:51:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:30:16.703 13:51:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:16.703 13:51:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:16.703 13:51:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:16.703 13:51:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:16.703 13:51:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:16.703 13:51:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:16.703 13:51:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:30:16.703 13:51:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:30:16.703 13:51:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:16.703 13:51:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:16.703 13:51:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:16.962 13:51:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:30:16.962 13:51:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:30:16.962 13:51:14 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:30:17.221 13:51:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:30:17.221 13:51:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:30:17.221 13:51:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:17.221 13:51:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:30:17.221 13:51:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:30:17.221 13:51:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:30:17.221 13:51:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:30:17.221 13:51:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:30:17.221 13:51:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:30:17.221 13:51:14 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:30:17.480 13:51:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:30:17.480 [2024-11-20 13:51:14.728652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:17.480 [2024-11-20 13:51:14.784797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:17.480 [2024-11-20 13:51:14.784797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:17.740 [2024-11-20 13:51:14.827853] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:30:17.740 [2024-11-20 13:51:14.828009] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:30:17.740 [2024-11-20 13:51:14.828021] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:30:21.035 13:51:17 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58444 /var/tmp/spdk-nbd.sock 00:30:21.035 13:51:17 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58444 ']' 00:30:21.035 13:51:17 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:30:21.035 13:51:17 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:21.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:30:21.035 13:51:17 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
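The pass traced above is the heart of nbd_common.sh's nbd_dd_data_verify: a 1 MiB file of random data is written onto every exported NBD device and then compared back byte-for-byte, after which the devices are detached with nbd_stop_disk over /var/tmp/spdk-nbd.sock and nbd_get_disks confirms the count has dropped to zero. A condensed sketch of that round trip, with paths and sizes taken from the trace (a simplified reading of the helper, not a drop-in replacement):

  tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
  nbd_list=(/dev/nbd0 /dev/nbd1)
  # 256 blocks of 4 KiB = 1 MiB of random reference data
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
  for dev in "${nbd_list[@]}"; do
    # write the reference data to each NBD device, bypassing the page cache
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
  done
  for dev in "${nbd_list[@]}"; do
    # read back and compare the first 1 MiB; cmp exits non-zero on any mismatch
    cmp -b -n 1M "$tmp_file" "$dev"
  done
  rm "$tmp_file"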
00:30:21.035 13:51:17 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:21.035 13:51:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:30:21.035 13:51:17 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:21.035 13:51:17 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:30:21.035 13:51:17 event.app_repeat -- event/event.sh@39 -- # killprocess 58444 00:30:21.035 13:51:17 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58444 ']' 00:30:21.035 13:51:17 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58444 00:30:21.035 13:51:17 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:30:21.035 13:51:17 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:21.035 13:51:17 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58444 00:30:21.035 killing process with pid 58444 00:30:21.035 13:51:17 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:21.035 13:51:17 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:21.035 13:51:17 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58444' 00:30:21.035 13:51:17 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58444 00:30:21.035 13:51:17 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58444 00:30:21.035 spdk_app_start is called in Round 0. 00:30:21.035 Shutdown signal received, stop current app iteration 00:30:21.035 Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 reinitialization... 00:30:21.035 spdk_app_start is called in Round 1. 00:30:21.035 Shutdown signal received, stop current app iteration 00:30:21.035 Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 reinitialization... 00:30:21.035 spdk_app_start is called in Round 2. 00:30:21.035 Shutdown signal received, stop current app iteration 00:30:21.035 Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 reinitialization... 00:30:21.035 spdk_app_start is called in Round 3. 00:30:21.035 Shutdown signal received, stop current app iteration 00:30:21.035 13:51:18 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:30:21.035 ************************************ 00:30:21.035 END TEST app_repeat 00:30:21.035 ************************************ 00:30:21.035 13:51:18 event.app_repeat -- event/event.sh@42 -- # return 0 00:30:21.035 00:30:21.035 real 0m19.072s 00:30:21.035 user 0m42.769s 00:30:21.035 sys 0m3.201s 00:30:21.035 13:51:18 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:21.035 13:51:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:30:21.035 13:51:18 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:30:21.035 13:51:18 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:30:21.035 13:51:18 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:21.035 13:51:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:21.035 13:51:18 event -- common/autotest_common.sh@10 -- # set +x 00:30:21.035 ************************************ 00:30:21.035 START TEST cpu_locks 00:30:21.035 ************************************ 00:30:21.035 13:51:18 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:30:21.035 * Looking for test storage... 
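Every test in this file tears its target down with the killprocess helper traced just above: it checks that the pid is still alive, resolves the process name (reactor_0 for an SPDK target), then kills and waits on it. A rough sketch of that flow based on the autotest_common.sh trace; the real helper also special-cases sudo-wrapped processes and non-Linux hosts, which is omitted here:

  killprocess() {
    local pid=$1 process_name
    kill -0 "$pid"                                     # fail early if the process is already gone
    if [[ $(uname) == Linux ]]; then
      process_name=$(ps --no-headers -o comm= "$pid")  # reactor_0 for a single-core spdk_tgt
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                        # reap it so the next test starts clean
  }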
00:30:21.035 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:30:21.035 13:51:18 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:21.035 13:51:18 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:30:21.035 13:51:18 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:21.035 13:51:18 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:21.035 13:51:18 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:21.035 13:51:18 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:21.035 13:51:18 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:21.035 13:51:18 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:30:21.035 13:51:18 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:30:21.035 13:51:18 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:30:21.035 13:51:18 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:30:21.035 13:51:18 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:30:21.035 13:51:18 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:30:21.035 13:51:18 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:30:21.035 13:51:18 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:21.035 13:51:18 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:30:21.035 13:51:18 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:30:21.035 13:51:18 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:21.035 13:51:18 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:21.035 13:51:18 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:30:21.035 13:51:18 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:30:21.036 13:51:18 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:21.036 13:51:18 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:30:21.036 13:51:18 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:30:21.036 13:51:18 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:30:21.036 13:51:18 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:30:21.036 13:51:18 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:21.036 13:51:18 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:30:21.036 13:51:18 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:30:21.036 13:51:18 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:21.036 13:51:18 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:21.036 13:51:18 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:30:21.036 13:51:18 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:21.036 13:51:18 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:21.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.036 --rc genhtml_branch_coverage=1 00:30:21.036 --rc genhtml_function_coverage=1 00:30:21.036 --rc genhtml_legend=1 00:30:21.036 --rc geninfo_all_blocks=1 00:30:21.036 --rc geninfo_unexecuted_blocks=1 00:30:21.036 00:30:21.036 ' 00:30:21.036 13:51:18 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:21.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.036 --rc genhtml_branch_coverage=1 00:30:21.036 --rc genhtml_function_coverage=1 
00:30:21.036 --rc genhtml_legend=1 00:30:21.036 --rc geninfo_all_blocks=1 00:30:21.036 --rc geninfo_unexecuted_blocks=1 00:30:21.036 00:30:21.036 ' 00:30:21.036 13:51:18 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:21.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.036 --rc genhtml_branch_coverage=1 00:30:21.036 --rc genhtml_function_coverage=1 00:30:21.036 --rc genhtml_legend=1 00:30:21.036 --rc geninfo_all_blocks=1 00:30:21.036 --rc geninfo_unexecuted_blocks=1 00:30:21.036 00:30:21.036 ' 00:30:21.036 13:51:18 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:21.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.036 --rc genhtml_branch_coverage=1 00:30:21.036 --rc genhtml_function_coverage=1 00:30:21.036 --rc genhtml_legend=1 00:30:21.036 --rc geninfo_all_blocks=1 00:30:21.036 --rc geninfo_unexecuted_blocks=1 00:30:21.036 00:30:21.036 ' 00:30:21.036 13:51:18 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:30:21.036 13:51:18 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:30:21.036 13:51:18 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:30:21.036 13:51:18 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:30:21.036 13:51:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:21.036 13:51:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:21.036 13:51:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:30:21.036 ************************************ 00:30:21.036 START TEST default_locks 00:30:21.036 ************************************ 00:30:21.036 13:51:18 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:30:21.036 13:51:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58885 00:30:21.036 13:51:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58885 00:30:21.036 13:51:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:30:21.036 13:51:18 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58885 ']' 00:30:21.036 13:51:18 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:21.036 13:51:18 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:21.036 13:51:18 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:21.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:21.036 13:51:18 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:21.036 13:51:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:30:21.295 [2024-11-20 13:51:18.391142] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:30:21.295 [2024-11-20 13:51:18.391298] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58885 ] 00:30:21.295 [2024-11-20 13:51:18.539882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:21.295 [2024-11-20 13:51:18.595714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:21.554 [2024-11-20 13:51:18.654448] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:30:22.119 13:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:22.119 13:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:30:22.119 13:51:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58885 00:30:22.119 13:51:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58885 00:30:22.120 13:51:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:30:22.377 13:51:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58885 00:30:22.377 13:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58885 ']' 00:30:22.377 13:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58885 00:30:22.377 13:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:30:22.377 13:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:22.377 13:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58885 00:30:22.377 13:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:22.377 13:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:22.377 13:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58885' 00:30:22.377 killing process with pid 58885 00:30:22.377 13:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58885 00:30:22.377 13:51:19 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58885 00:30:22.944 13:51:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58885 00:30:22.944 13:51:20 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:30:22.944 13:51:20 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58885 00:30:22.944 13:51:20 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:30:22.944 13:51:20 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:22.944 13:51:20 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:30:22.944 13:51:20 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:22.944 13:51:20 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58885 00:30:22.944 13:51:20 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58885 ']' 00:30:22.944 13:51:20 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:22.944 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:22.944 13:51:20 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:22.944 13:51:20 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:22.944 13:51:20 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:22.944 13:51:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:30:22.944 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58885) - No such process 00:30:22.944 ERROR: process (pid: 58885) is no longer running 00:30:22.944 13:51:20 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:22.944 13:51:20 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:30:22.945 13:51:20 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:30:22.945 13:51:20 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:22.945 13:51:20 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:22.945 13:51:20 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:22.945 13:51:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:30:22.945 13:51:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:30:22.945 13:51:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:30:22.945 13:51:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:30:22.945 00:30:22.945 real 0m1.685s 00:30:22.945 user 0m1.806s 00:30:22.945 sys 0m0.472s 00:30:22.945 13:51:20 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:22.945 13:51:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:30:22.945 ************************************ 00:30:22.945 END TEST default_locks 00:30:22.945 ************************************ 00:30:22.945 13:51:20 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:30:22.945 13:51:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:22.945 13:51:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:22.945 13:51:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:30:22.945 ************************************ 00:30:22.945 START TEST default_locks_via_rpc 00:30:22.945 ************************************ 00:30:22.945 13:51:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:30:22.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
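The default_locks test that just finished asserts core ownership the same way every later test in this suite does: lslocks is pointed at the target's pid and its output is grepped for the spdk_cpu_lock file the process should be holding. A minimal sketch of that check as traced from event/cpu_locks.sh (the lock files themselves live under /var/tmp, one per claimed core):

  locks_exist() {
    local pid=$1
    # spdk_tgt takes a lock on /var/tmp/spdk_cpu_lock_NNN for each core it claims;
    # a match in lslocks output proves the core is still held by this pid
    lslocks -p "$pid" | grep -q spdk_cpu_lock
  }

  locks_exist "$spdk_tgt_pid"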
00:30:22.945 13:51:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58937 00:30:22.945 13:51:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58937 00:30:22.945 13:51:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58937 ']' 00:30:22.945 13:51:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:22.945 13:51:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:22.945 13:51:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:22.945 13:51:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:22.945 13:51:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:22.945 13:51:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:30:22.945 [2024-11-20 13:51:20.132418] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:30:22.945 [2024-11-20 13:51:20.132489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58937 ] 00:30:23.205 [2024-11-20 13:51:20.284565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:23.205 [2024-11-20 13:51:20.340826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:23.205 [2024-11-20 13:51:20.400100] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:30:23.813 13:51:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:23.813 13:51:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:30:23.813 13:51:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:30:23.813 13:51:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.813 13:51:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:23.813 13:51:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.813 13:51:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:30:23.813 13:51:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:30:23.813 13:51:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:30:23.813 13:51:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:30:23.813 13:51:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:30:23.813 13:51:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.813 13:51:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:23.813 13:51:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.813 13:51:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # 
locks_exist 58937 00:30:23.813 13:51:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58937 00:30:23.813 13:51:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:30:24.381 13:51:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58937 00:30:24.381 13:51:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58937 ']' 00:30:24.381 13:51:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58937 00:30:24.381 13:51:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:30:24.381 13:51:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:24.381 13:51:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58937 00:30:24.381 killing process with pid 58937 00:30:24.381 13:51:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:24.381 13:51:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:24.381 13:51:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58937' 00:30:24.381 13:51:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58937 00:30:24.381 13:51:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58937 00:30:24.640 ************************************ 00:30:24.640 END TEST default_locks_via_rpc 00:30:24.640 ************************************ 00:30:24.640 00:30:24.640 real 0m1.766s 00:30:24.640 user 0m1.890s 00:30:24.640 sys 0m0.536s 00:30:24.640 13:51:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:24.640 13:51:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:24.640 13:51:21 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:30:24.640 13:51:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:24.640 13:51:21 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:24.640 13:51:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:30:24.640 ************************************ 00:30:24.640 START TEST non_locking_app_on_locked_coremask 00:30:24.640 ************************************ 00:30:24.640 13:51:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:30:24.640 13:51:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58977 00:30:24.640 13:51:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:30:24.640 13:51:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58977 /var/tmp/spdk.sock 00:30:24.640 13:51:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58977 ']' 00:30:24.640 13:51:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:24.640 13:51:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:30:24.640 13:51:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:24.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:24.640 13:51:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:24.640 13:51:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:30:24.640 [2024-11-20 13:51:21.955738] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:30:24.640 [2024-11-20 13:51:21.956244] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58977 ] 00:30:24.898 [2024-11-20 13:51:22.093108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:24.898 [2024-11-20 13:51:22.161880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:25.157 [2024-11-20 13:51:22.223294] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:30:25.157 13:51:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:25.157 13:51:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:30:25.157 13:51:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58991 00:30:25.157 13:51:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:30:25.157 13:51:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58991 /var/tmp/spdk2.sock 00:30:25.157 13:51:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58991 ']' 00:30:25.157 13:51:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:30:25.157 13:51:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:25.157 13:51:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:30:25.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:30:25.157 13:51:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:25.157 13:51:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:30:25.157 [2024-11-20 13:51:22.456609] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:30:25.157 [2024-11-20 13:51:22.456780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58991 ] 00:30:25.416 [2024-11-20 13:51:22.603454] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
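non_locking_app_on_locked_coremask pairs two targets on the same core: the first claims core 0 and its lock file normally, while the second is started with --disable-cpumask-locks (hence the 'CPU core locks deactivated' notice above) and its own RPC socket, so the two can coexist. The shape of that setup, condensed from the trace; launching the targets in the background with & and capturing $! is an assumption about the helper, while the command lines and sockets are as logged:

  # first target claims core 0 and /var/tmp/spdk_cpu_lock_000
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"

  # second target shares core 0 but skips the cpumask lock files,
  # listening on a second RPC socket so the two instances do not collide
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  spdk_tgt_pid2=$!
  waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock

  # only the first instance should show up as holding the core lock
  locks_exist "$spdk_tgt_pid"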
00:30:25.416 [2024-11-20 13:51:22.603497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:25.416 [2024-11-20 13:51:22.720705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:25.675 [2024-11-20 13:51:22.839383] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:30:26.241 13:51:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:26.241 13:51:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:30:26.241 13:51:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58977 00:30:26.241 13:51:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58977 00:30:26.241 13:51:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:30:26.807 13:51:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58977 00:30:26.807 13:51:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58977 ']' 00:30:26.808 13:51:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58977 00:30:26.808 13:51:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:30:26.808 13:51:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:26.808 13:51:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58977 00:30:26.808 killing process with pid 58977 00:30:26.808 13:51:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:26.808 13:51:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:26.808 13:51:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58977' 00:30:26.808 13:51:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58977 00:30:26.808 13:51:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58977 00:30:27.375 13:51:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58991 00:30:27.375 13:51:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58991 ']' 00:30:27.375 13:51:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58991 00:30:27.375 13:51:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:30:27.375 13:51:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:27.375 13:51:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58991 00:30:27.375 killing process with pid 58991 00:30:27.375 13:51:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:27.375 13:51:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:27.375 13:51:24 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58991' 00:30:27.375 13:51:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58991 00:30:27.375 13:51:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58991 00:30:27.634 ************************************ 00:30:27.634 END TEST non_locking_app_on_locked_coremask 00:30:27.634 ************************************ 00:30:27.634 00:30:27.634 real 0m3.008s 00:30:27.634 user 0m3.297s 00:30:27.634 sys 0m0.892s 00:30:27.634 13:51:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:27.634 13:51:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:30:27.910 13:51:24 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:30:27.910 13:51:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:27.910 13:51:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:27.910 13:51:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:30:27.910 ************************************ 00:30:27.910 START TEST locking_app_on_unlocked_coremask 00:30:27.910 ************************************ 00:30:27.910 13:51:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:30:27.910 13:51:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59047 00:30:27.910 13:51:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59047 /var/tmp/spdk.sock 00:30:27.910 13:51:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:30:27.910 13:51:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59047 ']' 00:30:27.910 13:51:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:27.910 13:51:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:27.910 13:51:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:27.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:27.910 13:51:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:27.910 13:51:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:30:27.910 [2024-11-20 13:51:25.029425] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:30:27.910 [2024-11-20 13:51:25.029496] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59047 ] 00:30:27.910 [2024-11-20 13:51:25.181334] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:30:27.910 [2024-11-20 13:51:25.181394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:28.177 [2024-11-20 13:51:25.240187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:28.177 [2024-11-20 13:51:25.301890] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:30:28.746 13:51:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:28.746 13:51:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:30:28.746 13:51:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59063 00:30:28.746 13:51:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:30:28.746 13:51:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59063 /var/tmp/spdk2.sock 00:30:28.746 13:51:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59063 ']' 00:30:28.746 13:51:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:30:28.746 13:51:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:28.746 13:51:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:30:28.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:30:28.746 13:51:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:28.746 13:51:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:30:28.746 [2024-11-20 13:51:26.018645] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:30:28.746 [2024-11-20 13:51:26.018883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59063 ] 00:30:29.004 [2024-11-20 13:51:26.194288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:29.262 [2024-11-20 13:51:26.330080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:29.262 [2024-11-20 13:51:26.446164] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:30:29.830 13:51:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:29.830 13:51:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:30:29.830 13:51:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59063 00:30:29.830 13:51:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59063 00:30:29.830 13:51:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:30:30.399 13:51:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59047 00:30:30.399 13:51:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59047 ']' 00:30:30.399 13:51:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59047 00:30:30.399 13:51:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:30:30.399 13:51:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:30.399 13:51:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59047 00:30:30.399 13:51:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:30.399 13:51:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:30.399 13:51:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59047' 00:30:30.399 killing process with pid 59047 00:30:30.399 13:51:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59047 00:30:30.399 13:51:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59047 00:30:30.969 13:51:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59063 00:30:30.969 13:51:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59063 ']' 00:30:30.969 13:51:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59063 00:30:30.969 13:51:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:30:30.969 13:51:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:30.969 13:51:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59063 00:30:30.969 killing process with pid 59063 00:30:30.969 13:51:28 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:30.969 13:51:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:30.969 13:51:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59063' 00:30:30.969 13:51:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59063 00:30:30.969 13:51:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59063 00:30:31.230 ************************************ 00:30:31.230 END TEST locking_app_on_unlocked_coremask 00:30:31.230 ************************************ 00:30:31.230 00:30:31.230 real 0m3.514s 00:30:31.230 user 0m3.951s 00:30:31.230 sys 0m0.917s 00:30:31.230 13:51:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:31.230 13:51:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:30:31.230 13:51:28 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:30:31.230 13:51:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:31.230 13:51:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:31.230 13:51:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:30:31.230 ************************************ 00:30:31.230 START TEST locking_app_on_locked_coremask 00:30:31.230 ************************************ 00:30:31.230 13:51:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:30:31.230 13:51:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59130 00:30:31.230 13:51:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59130 /var/tmp/spdk.sock 00:30:31.230 13:51:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:30:31.230 13:51:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59130 ']' 00:30:31.230 13:51:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:31.230 13:51:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:31.489 13:51:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:31.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:31.489 13:51:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:31.489 13:51:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:30:31.489 [2024-11-20 13:51:28.606086] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:30:31.490 [2024-11-20 13:51:28.606233] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59130 ] 00:30:31.490 [2024-11-20 13:51:28.755137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:31.749 [2024-11-20 13:51:28.811975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:31.749 [2024-11-20 13:51:28.869593] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:30:32.318 13:51:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:32.318 13:51:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:30:32.318 13:51:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59146 00:30:32.318 13:51:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:30:32.318 13:51:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59146 /var/tmp/spdk2.sock 00:30:32.318 13:51:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:30:32.318 13:51:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59146 /var/tmp/spdk2.sock 00:30:32.318 13:51:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:30:32.318 13:51:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:32.318 13:51:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:30:32.318 13:51:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:32.318 13:51:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59146 /var/tmp/spdk2.sock 00:30:32.318 13:51:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59146 ']' 00:30:32.318 13:51:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:30:32.318 13:51:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:32.318 13:51:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:30:32.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:30:32.318 13:51:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:32.318 13:51:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:30:32.318 [2024-11-20 13:51:29.576949] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
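locking_app_on_locked_coremask inverts the expectation: the second target (pid 59146 here) uses the same -m 0x1 mask without --disable-cpumask-locks, so its startup must abort, and the test wraps waitforlisten in the NOT helper, which succeeds only when the wrapped command fails (the 'Cannot create lock on core 0' error follows just below). A sketch of that negative-test shape, command lines as traced and the backgrounding again assumed:

  # second target tries to claim a core the first one already locked
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
  spdk_tgt_pid2=$!

  # startup aborts, so waiting on its RPC socket must fail;
  # NOT inverts the exit status, turning the expected failure into a pass
  NOT waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock

  # the original instance still holds the core lock
  locks_exist "$spdk_tgt_pid"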
00:30:32.318 [2024-11-20 13:51:29.577106] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59146 ] 00:30:32.577 [2024-11-20 13:51:29.721567] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59130 has claimed it. 00:30:32.577 [2024-11-20 13:51:29.721647] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:30:33.143 ERROR: process (pid: 59146) is no longer running 00:30:33.143 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59146) - No such process 00:30:33.143 13:51:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:33.143 13:51:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:30:33.143 13:51:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:30:33.143 13:51:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:33.143 13:51:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:33.143 13:51:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:33.143 13:51:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59130 00:30:33.143 13:51:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59130 00:30:33.143 13:51:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:30:33.401 13:51:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59130 00:30:33.401 13:51:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59130 ']' 00:30:33.401 13:51:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59130 00:30:33.401 13:51:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:30:33.401 13:51:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:33.401 13:51:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59130 00:30:33.401 13:51:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:33.401 13:51:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:33.401 13:51:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59130' 00:30:33.401 killing process with pid 59130 00:30:33.401 13:51:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59130 00:30:33.401 13:51:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59130 00:30:33.967 00:30:33.967 real 0m2.455s 00:30:33.967 user 0m2.866s 00:30:33.967 sys 0m0.547s 00:30:33.967 13:51:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:33.967 13:51:31 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:30:33.967 ************************************ 00:30:33.967 END TEST locking_app_on_locked_coremask 00:30:33.967 ************************************ 00:30:33.967 13:51:31 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:30:33.967 13:51:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:33.967 13:51:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:33.967 13:51:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:30:33.967 ************************************ 00:30:33.967 START TEST locking_overlapped_coremask 00:30:33.967 ************************************ 00:30:33.967 13:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:30:33.967 13:51:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59186 00:30:33.967 13:51:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59186 /var/tmp/spdk.sock 00:30:33.967 13:51:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:30:33.967 13:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59186 ']' 00:30:33.967 13:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:33.968 13:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:33.968 13:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:33.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:33.968 13:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:33.968 13:51:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:30:33.968 [2024-11-20 13:51:31.096625] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:30:33.968 [2024-11-20 13:51:31.096858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59186 ] 00:30:33.968 [2024-11-20 13:51:31.234895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:34.225 [2024-11-20 13:51:31.293816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:34.225 [2024-11-20 13:51:31.293872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:34.225 [2024-11-20 13:51:31.293875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:34.225 [2024-11-20 13:51:31.352758] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:30:34.790 13:51:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:34.790 13:51:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:30:34.790 13:51:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59204 00:30:34.790 13:51:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59204 /var/tmp/spdk2.sock 00:30:34.790 13:51:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:30:34.790 13:51:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:30:34.790 13:51:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59204 /var/tmp/spdk2.sock 00:30:34.790 13:51:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:30:34.790 13:51:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:34.790 13:51:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:30:34.790 13:51:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:34.790 13:51:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59204 /var/tmp/spdk2.sock 00:30:34.790 13:51:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59204 ']' 00:30:34.790 13:51:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:30:34.790 13:51:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:34.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:30:34.790 13:51:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:30:34.790 13:51:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:34.790 13:51:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:30:34.790 [2024-11-20 13:51:32.056904] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
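In locking_overlapped_coremask the collision moves into the masks themselves: the first target runs with -m 0x7 (binary 111, cores 0-2) and the second with -m 0x1c (binary 11100, cores 2-4), so they overlap only on core 2 and the second instance is expected to die on exactly that core, as the error below shows. A small illustration of the overlap plus the same NOT assertion, mask values as traced:

  first_mask=0x7     # cores 0,1,2
  second_mask=0x1c   # cores 2,3,4
  # the shared bit is core 2: 0x7 & 0x1c == 0x4
  printf 'overlap mask: 0x%x\n' $(( first_mask & second_mask ))

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock &
  NOT waitforlisten "$!" /var/tmp/spdk2.sock   # fails because core 2 is already locked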
00:30:34.790 [2024-11-20 13:51:32.056983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59204 ] 00:30:35.047 [2024-11-20 13:51:32.208847] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59186 has claimed it. 00:30:35.047 [2024-11-20 13:51:32.208922] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:30:35.612 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59204) - No such process 00:30:35.612 ERROR: process (pid: 59204) is no longer running 00:30:35.612 13:51:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:35.612 13:51:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:30:35.612 13:51:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:30:35.612 13:51:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:35.612 13:51:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:35.612 13:51:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:35.612 13:51:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:30:35.612 13:51:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:30:35.612 13:51:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:30:35.612 13:51:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:30:35.612 13:51:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59186 00:30:35.612 13:51:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59186 ']' 00:30:35.612 13:51:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59186 00:30:35.612 13:51:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:30:35.612 13:51:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:35.612 13:51:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59186 00:30:35.612 13:51:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:35.612 13:51:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:35.612 killing process with pid 59186 00:30:35.612 13:51:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59186' 00:30:35.612 13:51:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59186 00:30:35.612 13:51:32 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59186 00:30:35.897 00:30:35.897 real 0m2.061s 00:30:35.897 user 0m5.806s 00:30:35.897 sys 0m0.348s 00:30:35.897 13:51:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:35.897 13:51:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:30:35.897 ************************************ 00:30:35.897 END TEST locking_overlapped_coremask 00:30:35.897 ************************************ 00:30:35.897 13:51:33 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:30:35.898 13:51:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:35.898 13:51:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:35.898 13:51:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:30:35.898 ************************************ 00:30:35.898 START TEST locking_overlapped_coremask_via_rpc 00:30:35.898 ************************************ 00:30:35.898 13:51:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:30:35.898 13:51:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59244 00:30:35.898 13:51:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59244 /var/tmp/spdk.sock 00:30:35.898 13:51:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:30:35.898 13:51:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59244 ']' 00:30:35.898 13:51:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:35.898 13:51:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:35.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:35.898 13:51:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:35.898 13:51:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:35.898 13:51:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:35.898 [2024-11-20 13:51:33.192741] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:30:35.898 [2024-11-20 13:51:33.192841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59244 ] 00:30:36.163 [2024-11-20 13:51:33.338461] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
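The *_via_rpc variant that starts here launches both targets with --disable-cpumask-locks, which is why the "CPU core locks deactivated" notice appears and both instances come up even though their masks still overlap on core 2; the locks are only claimed later over RPC. A short sketch of that launch, with the same illustrative paths as above:

  ./build/bin/spdk_tgt -m 0x7  -r /var/tmp/spdk.sock  --disable-cpumask-locks &   # cores 0-2, no lock files taken at startup
  ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &   # cores 2-4, starts despite the overlap
  ls /var/tmp/spdk_cpu_lock_* 2>/dev/null || echo "no lock files yet"             # locks are only taken via RPC below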
00:30:36.163 [2024-11-20 13:51:33.338519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:36.163 [2024-11-20 13:51:33.399050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:36.163 [2024-11-20 13:51:33.398958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:36.163 [2024-11-20 13:51:33.399031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:36.163 [2024-11-20 13:51:33.459976] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:30:37.095 13:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:37.096 13:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:30:37.096 13:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59262 00:30:37.096 13:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59262 /var/tmp/spdk2.sock 00:30:37.096 13:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:30:37.096 13:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59262 ']' 00:30:37.096 13:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:30:37.096 13:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:37.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:30:37.096 13:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:30:37.096 13:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:37.096 13:51:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:37.096 [2024-11-20 13:51:34.184816] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:30:37.096 [2024-11-20 13:51:34.185241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59262 ] 00:30:37.096 [2024-11-20 13:51:34.340551] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:30:37.096 [2024-11-20 13:51:34.340615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:37.353 [2024-11-20 13:51:34.466970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:37.353 [2024-11-20 13:51:34.470771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:37.353 [2024-11-20 13:51:34.470773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:37.353 [2024-11-20 13:51:34.598018] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:30:37.918 13:51:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:37.918 13:51:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:30:37.918 13:51:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:30:37.918 13:51:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.918 13:51:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:37.918 13:51:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.918 13:51:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:30:37.918 13:51:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:30:37.918 13:51:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:30:37.918 13:51:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:37.918 13:51:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:37.918 13:51:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:37.918 13:51:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:37.918 13:51:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:30:37.918 13:51:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.918 13:51:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:37.918 [2024-11-20 13:51:35.134853] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59244 has claimed it. 
00:30:37.918 request: 00:30:37.918 { 00:30:37.918 "method": "framework_enable_cpumask_locks", 00:30:37.918 "req_id": 1 00:30:37.918 } 00:30:37.918 Got JSON-RPC error response 00:30:37.918 response: 00:30:37.918 { 00:30:37.918 "code": -32603, 00:30:37.918 "message": "Failed to claim CPU core: 2" 00:30:37.918 } 00:30:37.918 13:51:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:37.918 13:51:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:30:37.918 13:51:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:37.918 13:51:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:37.918 13:51:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:37.918 13:51:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59244 /var/tmp/spdk.sock 00:30:37.918 13:51:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59244 ']' 00:30:37.918 13:51:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:37.918 13:51:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:37.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:37.918 13:51:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:37.918 13:51:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:37.918 13:51:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:38.176 13:51:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:38.176 13:51:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:30:38.176 13:51:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59262 /var/tmp/spdk2.sock 00:30:38.176 13:51:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59262 ']' 00:30:38.176 13:51:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:30:38.176 13:51:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:38.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:30:38.176 13:51:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
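With both targets up, the locks are claimed over JSON-RPC: framework_enable_cpumask_locks succeeds against the first instance, and the same call on /var/tmp/spdk2.sock returns the -32603 "Failed to claim CPU core: 2" response dumped above. The same pair of calls with the in-tree rpc.py client, assuming the two targets from the sketches above are still running:

  ./scripts/rpc.py -s /var/tmp/spdk.sock  framework_enable_cpumask_locks    # first caller wins cores 0-2
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
    || echo "expected failure: core 2 already locked (JSON-RPC -32603)"
  ls /var/tmp/spdk_cpu_lock_00{0..2}                                        # lock files now held by the first target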
00:30:38.176 13:51:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:38.176 13:51:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:38.434 13:51:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:38.434 13:51:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:30:38.434 13:51:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:30:38.434 13:51:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:30:38.434 13:51:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:30:38.434 13:51:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:30:38.434 00:30:38.434 real 0m2.507s 00:30:38.434 user 0m1.291s 00:30:38.434 sys 0m0.139s 00:30:38.434 13:51:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:38.434 13:51:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:38.434 ************************************ 00:30:38.434 END TEST locking_overlapped_coremask_via_rpc 00:30:38.434 ************************************ 00:30:38.434 13:51:35 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:30:38.434 13:51:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59244 ]] 00:30:38.434 13:51:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59244 00:30:38.434 13:51:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59244 ']' 00:30:38.434 13:51:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59244 00:30:38.434 13:51:35 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:30:38.434 13:51:35 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:38.434 13:51:35 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59244 00:30:38.434 13:51:35 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:38.434 13:51:35 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:38.434 killing process with pid 59244 00:30:38.434 13:51:35 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59244' 00:30:38.434 13:51:35 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59244 00:30:38.434 13:51:35 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59244 00:30:39.000 13:51:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59262 ]] 00:30:39.000 13:51:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59262 00:30:39.000 13:51:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59262 ']' 00:30:39.000 13:51:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59262 00:30:39.000 13:51:36 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:30:39.000 13:51:36 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:39.000 
13:51:36 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59262 00:30:39.000 13:51:36 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:30:39.000 13:51:36 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:30:39.000 killing process with pid 59262 00:30:39.000 13:51:36 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59262' 00:30:39.000 13:51:36 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59262 00:30:39.000 13:51:36 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59262 00:30:39.260 13:51:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:30:39.260 13:51:36 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:30:39.260 13:51:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59244 ]] 00:30:39.260 13:51:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59244 00:30:39.260 13:51:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59244 ']' 00:30:39.260 13:51:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59244 00:30:39.260 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59244) - No such process 00:30:39.260 Process with pid 59244 is not found 00:30:39.260 13:51:36 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59244 is not found' 00:30:39.260 13:51:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59262 ]] 00:30:39.260 13:51:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59262 00:30:39.260 13:51:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59262 ']' 00:30:39.260 13:51:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59262 00:30:39.260 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59262) - No such process 00:30:39.260 Process with pid 59262 is not found 00:30:39.260 13:51:36 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59262 is not found' 00:30:39.260 13:51:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:30:39.260 00:30:39.260 real 0m18.301s 00:30:39.260 user 0m32.752s 00:30:39.260 sys 0m4.710s 00:30:39.260 13:51:36 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:39.260 13:51:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:30:39.260 ************************************ 00:30:39.260 END TEST cpu_locks 00:30:39.260 ************************************ 00:30:39.260 00:30:39.260 real 0m47.948s 00:30:39.260 user 1m35.445s 00:30:39.260 sys 0m8.905s 00:30:39.260 13:51:36 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:39.260 13:51:36 event -- common/autotest_common.sh@10 -- # set +x 00:30:39.260 ************************************ 00:30:39.260 END TEST event 00:30:39.260 ************************************ 00:30:39.260 13:51:36 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:30:39.260 13:51:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:39.260 13:51:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:39.260 13:51:36 -- common/autotest_common.sh@10 -- # set +x 00:30:39.260 ************************************ 00:30:39.260 START TEST thread 00:30:39.260 ************************************ 00:30:39.260 13:51:36 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:30:39.519 * Looking for test storage... 
00:30:39.519 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:30:39.519 13:51:36 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:39.519 13:51:36 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:30:39.519 13:51:36 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:39.519 13:51:36 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:39.519 13:51:36 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:39.519 13:51:36 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:39.519 13:51:36 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:39.519 13:51:36 thread -- scripts/common.sh@336 -- # IFS=.-: 00:30:39.519 13:51:36 thread -- scripts/common.sh@336 -- # read -ra ver1 00:30:39.519 13:51:36 thread -- scripts/common.sh@337 -- # IFS=.-: 00:30:39.519 13:51:36 thread -- scripts/common.sh@337 -- # read -ra ver2 00:30:39.519 13:51:36 thread -- scripts/common.sh@338 -- # local 'op=<' 00:30:39.519 13:51:36 thread -- scripts/common.sh@340 -- # ver1_l=2 00:30:39.519 13:51:36 thread -- scripts/common.sh@341 -- # ver2_l=1 00:30:39.519 13:51:36 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:39.519 13:51:36 thread -- scripts/common.sh@344 -- # case "$op" in 00:30:39.519 13:51:36 thread -- scripts/common.sh@345 -- # : 1 00:30:39.519 13:51:36 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:39.519 13:51:36 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:39.519 13:51:36 thread -- scripts/common.sh@365 -- # decimal 1 00:30:39.519 13:51:36 thread -- scripts/common.sh@353 -- # local d=1 00:30:39.519 13:51:36 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:39.519 13:51:36 thread -- scripts/common.sh@355 -- # echo 1 00:30:39.519 13:51:36 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:30:39.519 13:51:36 thread -- scripts/common.sh@366 -- # decimal 2 00:30:39.519 13:51:36 thread -- scripts/common.sh@353 -- # local d=2 00:30:39.519 13:51:36 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:39.519 13:51:36 thread -- scripts/common.sh@355 -- # echo 2 00:30:39.519 13:51:36 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:30:39.519 13:51:36 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:39.519 13:51:36 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:39.519 13:51:36 thread -- scripts/common.sh@368 -- # return 0 00:30:39.519 13:51:36 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:39.519 13:51:36 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:39.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.519 --rc genhtml_branch_coverage=1 00:30:39.519 --rc genhtml_function_coverage=1 00:30:39.519 --rc genhtml_legend=1 00:30:39.519 --rc geninfo_all_blocks=1 00:30:39.519 --rc geninfo_unexecuted_blocks=1 00:30:39.519 00:30:39.519 ' 00:30:39.519 13:51:36 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:39.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.519 --rc genhtml_branch_coverage=1 00:30:39.519 --rc genhtml_function_coverage=1 00:30:39.519 --rc genhtml_legend=1 00:30:39.519 --rc geninfo_all_blocks=1 00:30:39.519 --rc geninfo_unexecuted_blocks=1 00:30:39.519 00:30:39.519 ' 00:30:39.519 13:51:36 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:39.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:30:39.519 --rc genhtml_branch_coverage=1 00:30:39.519 --rc genhtml_function_coverage=1 00:30:39.519 --rc genhtml_legend=1 00:30:39.519 --rc geninfo_all_blocks=1 00:30:39.519 --rc geninfo_unexecuted_blocks=1 00:30:39.519 00:30:39.519 ' 00:30:39.519 13:51:36 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:39.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.519 --rc genhtml_branch_coverage=1 00:30:39.519 --rc genhtml_function_coverage=1 00:30:39.519 --rc genhtml_legend=1 00:30:39.519 --rc geninfo_all_blocks=1 00:30:39.519 --rc geninfo_unexecuted_blocks=1 00:30:39.519 00:30:39.519 ' 00:30:39.519 13:51:36 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:30:39.519 13:51:36 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:30:39.519 13:51:36 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:39.519 13:51:36 thread -- common/autotest_common.sh@10 -- # set +x 00:30:39.519 ************************************ 00:30:39.519 START TEST thread_poller_perf 00:30:39.519 ************************************ 00:30:39.519 13:51:36 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:30:39.519 [2024-11-20 13:51:36.790164] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:30:39.519 [2024-11-20 13:51:36.790260] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59398 ] 00:30:39.779 [2024-11-20 13:51:36.937414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:39.779 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:30:39.780 [2024-11-20 13:51:36.991341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:41.155 [2024-11-20T13:51:38.478Z] ====================================== 00:30:41.155 [2024-11-20T13:51:38.478Z] busy:2298691550 (cyc) 00:30:41.155 [2024-11-20T13:51:38.478Z] total_run_count: 325000 00:30:41.155 [2024-11-20T13:51:38.478Z] tsc_hz: 2290000000 (cyc) 00:30:41.155 [2024-11-20T13:51:38.478Z] ====================================== 00:30:41.155 [2024-11-20T13:51:38.478Z] poller_cost: 7072 (cyc), 3088 (nsec) 00:30:41.155 00:30:41.155 real 0m1.287s 00:30:41.155 user 0m1.134s 00:30:41.155 sys 0m0.046s 00:30:41.155 13:51:38 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:41.155 13:51:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:30:41.155 ************************************ 00:30:41.155 END TEST thread_poller_perf 00:30:41.155 ************************************ 00:30:41.155 13:51:38 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:30:41.155 13:51:38 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:30:41.155 13:51:38 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:41.155 13:51:38 thread -- common/autotest_common.sh@10 -- # set +x 00:30:41.155 ************************************ 00:30:41.155 START TEST thread_poller_perf 00:30:41.155 ************************************ 00:30:41.155 13:51:38 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:30:41.155 [2024-11-20 13:51:38.128516] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:30:41.155 [2024-11-20 13:51:38.128614] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59428 ] 00:30:41.155 [2024-11-20 13:51:38.281211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:41.155 Running 1000 pollers for 1 seconds with 0 microseconds period. 
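The figures in the summary above are consistent with poller_cost being the busy cycle count divided by the number of poller iterations, converted to nanoseconds with the reported TSC frequency; integer shell arithmetic reproduces the printed values for this 1-microsecond-period run:

  busy=2298691550; runs=325000; tsc_hz=2290000000
  cyc=$(( busy / runs ))                        # 7072 cycles per poller call
  nsec=$(( cyc * 1000000000 / tsc_hz ))         # 3088 ns at the 2.29 GHz TSC
  echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"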
00:30:41.155 [2024-11-20 13:51:38.337057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:42.103 [2024-11-20T13:51:39.426Z] ====================================== 00:30:42.103 [2024-11-20T13:51:39.426Z] busy:2291877610 (cyc) 00:30:42.103 [2024-11-20T13:51:39.426Z] total_run_count: 4100000 00:30:42.103 [2024-11-20T13:51:39.426Z] tsc_hz: 2290000000 (cyc) 00:30:42.103 [2024-11-20T13:51:39.426Z] ====================================== 00:30:42.103 [2024-11-20T13:51:39.426Z] poller_cost: 558 (cyc), 243 (nsec) 00:30:42.103 00:30:42.103 real 0m1.282s 00:30:42.103 user 0m1.130s 00:30:42.103 sys 0m0.046s 00:30:42.103 13:51:39 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:42.103 ************************************ 00:30:42.103 END TEST thread_poller_perf 00:30:42.103 ************************************ 00:30:42.103 13:51:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:30:42.362 13:51:39 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:30:42.362 00:30:42.362 real 0m2.917s 00:30:42.362 user 0m2.432s 00:30:42.362 sys 0m0.296s 00:30:42.362 13:51:39 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:42.362 13:51:39 thread -- common/autotest_common.sh@10 -- # set +x 00:30:42.362 ************************************ 00:30:42.362 END TEST thread 00:30:42.362 ************************************ 00:30:42.362 13:51:39 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:30:42.362 13:51:39 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:30:42.362 13:51:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:42.362 13:51:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:42.362 13:51:39 -- common/autotest_common.sh@10 -- # set +x 00:30:42.362 ************************************ 00:30:42.362 START TEST app_cmdline 00:30:42.362 ************************************ 00:30:42.362 13:51:39 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:30:42.362 * Looking for test storage... 
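The second run drops the poller period from 1 microsecond to 0, so the reactor gets through far more iterations (4,100,000 vs 325,000) and the per-call cost falls accordingly; the same integer arithmetic reproduces its numbers:

  echo $(( 2291877610 / 4100000 ))                                   # 558 cycles
  echo $(( 2291877610 / 4100000 * 1000000000 / 2290000000 ))         # 243 ns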
00:30:42.362 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:30:42.362 13:51:39 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:42.362 13:51:39 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:30:42.362 13:51:39 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:42.362 13:51:39 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:42.362 13:51:39 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:42.362 13:51:39 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:42.362 13:51:39 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:42.362 13:51:39 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:30:42.362 13:51:39 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:30:42.362 13:51:39 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:30:42.362 13:51:39 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:30:42.363 13:51:39 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:30:42.363 13:51:39 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:30:42.363 13:51:39 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:30:42.363 13:51:39 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:42.363 13:51:39 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:30:42.363 13:51:39 app_cmdline -- scripts/common.sh@345 -- # : 1 00:30:42.363 13:51:39 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:42.363 13:51:39 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:42.363 13:51:39 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:30:42.363 13:51:39 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:30:42.363 13:51:39 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:42.363 13:51:39 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:30:42.363 13:51:39 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:30:42.363 13:51:39 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:30:42.363 13:51:39 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:30:42.363 13:51:39 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:42.363 13:51:39 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:30:42.363 13:51:39 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:30:42.363 13:51:39 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:42.363 13:51:39 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:42.363 13:51:39 app_cmdline -- scripts/common.sh@368 -- # return 0 00:30:42.363 13:51:39 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:42.363 13:51:39 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:42.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.363 --rc genhtml_branch_coverage=1 00:30:42.363 --rc genhtml_function_coverage=1 00:30:42.363 --rc genhtml_legend=1 00:30:42.363 --rc geninfo_all_blocks=1 00:30:42.363 --rc geninfo_unexecuted_blocks=1 00:30:42.363 00:30:42.363 ' 00:30:42.363 13:51:39 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:42.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.363 --rc genhtml_branch_coverage=1 00:30:42.363 --rc genhtml_function_coverage=1 00:30:42.363 --rc genhtml_legend=1 00:30:42.363 --rc geninfo_all_blocks=1 00:30:42.363 --rc geninfo_unexecuted_blocks=1 00:30:42.363 
00:30:42.363 ' 00:30:42.363 13:51:39 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:42.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.363 --rc genhtml_branch_coverage=1 00:30:42.363 --rc genhtml_function_coverage=1 00:30:42.363 --rc genhtml_legend=1 00:30:42.363 --rc geninfo_all_blocks=1 00:30:42.363 --rc geninfo_unexecuted_blocks=1 00:30:42.363 00:30:42.363 ' 00:30:42.363 13:51:39 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:42.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.363 --rc genhtml_branch_coverage=1 00:30:42.363 --rc genhtml_function_coverage=1 00:30:42.363 --rc genhtml_legend=1 00:30:42.363 --rc geninfo_all_blocks=1 00:30:42.363 --rc geninfo_unexecuted_blocks=1 00:30:42.363 00:30:42.363 ' 00:30:42.363 13:51:39 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:30:42.363 13:51:39 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59511 00:30:42.363 13:51:39 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59511 00:30:42.363 13:51:39 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59511 ']' 00:30:42.363 13:51:39 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:42.363 13:51:39 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:42.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:42.363 13:51:39 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:42.363 13:51:39 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:30:42.363 13:51:39 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:42.363 13:51:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:30:42.623 [2024-11-20 13:51:39.739643] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:30:42.623 [2024-11-20 13:51:39.739743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59511 ] 00:30:42.623 [2024-11-20 13:51:39.876479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:42.623 [2024-11-20 13:51:39.933255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:42.882 [2024-11-20 13:51:39.993363] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:30:43.450 13:51:40 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:43.450 13:51:40 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:30:43.450 13:51:40 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:30:43.708 { 00:30:43.708 "version": "SPDK v25.01-pre git sha1 f9d18d578", 00:30:43.708 "fields": { 00:30:43.708 "major": 25, 00:30:43.708 "minor": 1, 00:30:43.708 "patch": 0, 00:30:43.708 "suffix": "-pre", 00:30:43.708 "commit": "f9d18d578" 00:30:43.708 } 00:30:43.708 } 00:30:43.708 13:51:40 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:30:43.708 13:51:40 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:30:43.708 13:51:40 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:30:43.708 13:51:40 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:30:43.708 13:51:40 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:30:43.708 13:51:40 app_cmdline -- app/cmdline.sh@26 -- # sort 00:30:43.708 13:51:40 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:30:43.708 13:51:40 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.708 13:51:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:30:43.708 13:51:40 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.708 13:51:40 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:30:43.708 13:51:40 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:30:43.708 13:51:40 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:30:43.708 13:51:40 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:30:43.708 13:51:40 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:30:43.708 13:51:40 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:43.708 13:51:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:43.708 13:51:40 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:43.708 13:51:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:43.708 13:51:40 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:43.709 13:51:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:43.709 13:51:40 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:43.709 13:51:40 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:30:43.709 13:51:40 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:30:43.967 request: 00:30:43.967 { 00:30:43.967 "method": "env_dpdk_get_mem_stats", 00:30:43.967 "req_id": 1 00:30:43.967 } 00:30:43.967 Got JSON-RPC error response 00:30:43.967 response: 00:30:43.967 { 00:30:43.967 "code": -32601, 00:30:43.967 "message": "Method not found" 00:30:43.967 } 00:30:43.967 13:51:41 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:30:43.967 13:51:41 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:43.967 13:51:41 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:43.967 13:51:41 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:43.967 13:51:41 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59511 00:30:43.967 13:51:41 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59511 ']' 00:30:43.967 13:51:41 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59511 00:30:43.967 13:51:41 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:30:43.967 13:51:41 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:43.967 13:51:41 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59511 00:30:43.967 13:51:41 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:43.967 13:51:41 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:43.967 13:51:41 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59511' 00:30:43.967 killing process with pid 59511 00:30:43.967 13:51:41 app_cmdline -- common/autotest_common.sh@973 -- # kill 59511 00:30:43.967 13:51:41 app_cmdline -- common/autotest_common.sh@978 -- # wait 59511 00:30:44.225 00:30:44.225 real 0m2.049s 00:30:44.225 user 0m2.501s 00:30:44.225 sys 0m0.459s 00:30:44.225 13:51:41 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:44.225 13:51:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:30:44.225 ************************************ 00:30:44.225 END TEST app_cmdline 00:30:44.225 ************************************ 00:30:44.484 13:51:41 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:30:44.484 13:51:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:44.484 13:51:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:44.484 13:51:41 -- common/autotest_common.sh@10 -- # set +x 00:30:44.484 ************************************ 00:30:44.484 START TEST version 00:30:44.484 ************************************ 00:30:44.484 13:51:41 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:30:44.484 * Looking for test storage... 
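The app_cmdline run above starts the target with --rpcs-allowed spdk_get_version,rpc_get_methods, so exactly those two methods are callable: spdk_get_version returns the version JSON shown earlier, rpc_get_methods lists the two permitted entries, and anything else, such as env_dpdk_get_mem_stats, gets the -32601 "Method not found" response dumped above before the target is killed. A sketch of the same allowlist behaviour with rpc.py, paths illustrative:

  ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  sleep 2                                          # crude wait; the test uses waitforlisten
  ./scripts/rpc.py spdk_get_version                # allowed: prints the version object
  ./scripts/rpc.py rpc_get_methods                 # allowed: lists exactly the two permitted methods
  ./scripts/rpc.py env_dpdk_get_mem_stats \
    || echo "expected: Method not found (-32601)"  # any method outside the allowlist is rejected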
00:30:44.484 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:30:44.484 13:51:41 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:44.484 13:51:41 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:44.484 13:51:41 version -- common/autotest_common.sh@1693 -- # lcov --version 00:30:44.484 13:51:41 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:44.484 13:51:41 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:44.484 13:51:41 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:44.484 13:51:41 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:44.484 13:51:41 version -- scripts/common.sh@336 -- # IFS=.-: 00:30:44.484 13:51:41 version -- scripts/common.sh@336 -- # read -ra ver1 00:30:44.484 13:51:41 version -- scripts/common.sh@337 -- # IFS=.-: 00:30:44.484 13:51:41 version -- scripts/common.sh@337 -- # read -ra ver2 00:30:44.484 13:51:41 version -- scripts/common.sh@338 -- # local 'op=<' 00:30:44.484 13:51:41 version -- scripts/common.sh@340 -- # ver1_l=2 00:30:44.484 13:51:41 version -- scripts/common.sh@341 -- # ver2_l=1 00:30:44.484 13:51:41 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:44.484 13:51:41 version -- scripts/common.sh@344 -- # case "$op" in 00:30:44.484 13:51:41 version -- scripts/common.sh@345 -- # : 1 00:30:44.484 13:51:41 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:44.484 13:51:41 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:44.484 13:51:41 version -- scripts/common.sh@365 -- # decimal 1 00:30:44.484 13:51:41 version -- scripts/common.sh@353 -- # local d=1 00:30:44.484 13:51:41 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:44.484 13:51:41 version -- scripts/common.sh@355 -- # echo 1 00:30:44.484 13:51:41 version -- scripts/common.sh@365 -- # ver1[v]=1 00:30:44.484 13:51:41 version -- scripts/common.sh@366 -- # decimal 2 00:30:44.484 13:51:41 version -- scripts/common.sh@353 -- # local d=2 00:30:44.484 13:51:41 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:44.484 13:51:41 version -- scripts/common.sh@355 -- # echo 2 00:30:44.484 13:51:41 version -- scripts/common.sh@366 -- # ver2[v]=2 00:30:44.484 13:51:41 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:44.484 13:51:41 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:44.484 13:51:41 version -- scripts/common.sh@368 -- # return 0 00:30:44.485 13:51:41 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:44.485 13:51:41 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:44.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.485 --rc genhtml_branch_coverage=1 00:30:44.485 --rc genhtml_function_coverage=1 00:30:44.485 --rc genhtml_legend=1 00:30:44.485 --rc geninfo_all_blocks=1 00:30:44.485 --rc geninfo_unexecuted_blocks=1 00:30:44.485 00:30:44.485 ' 00:30:44.485 13:51:41 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:44.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.485 --rc genhtml_branch_coverage=1 00:30:44.485 --rc genhtml_function_coverage=1 00:30:44.485 --rc genhtml_legend=1 00:30:44.485 --rc geninfo_all_blocks=1 00:30:44.485 --rc geninfo_unexecuted_blocks=1 00:30:44.485 00:30:44.485 ' 00:30:44.485 13:51:41 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:44.485 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:30:44.485 --rc genhtml_branch_coverage=1 00:30:44.485 --rc genhtml_function_coverage=1 00:30:44.485 --rc genhtml_legend=1 00:30:44.485 --rc geninfo_all_blocks=1 00:30:44.485 --rc geninfo_unexecuted_blocks=1 00:30:44.485 00:30:44.485 ' 00:30:44.485 13:51:41 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:44.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.485 --rc genhtml_branch_coverage=1 00:30:44.485 --rc genhtml_function_coverage=1 00:30:44.485 --rc genhtml_legend=1 00:30:44.485 --rc geninfo_all_blocks=1 00:30:44.485 --rc geninfo_unexecuted_blocks=1 00:30:44.485 00:30:44.485 ' 00:30:44.485 13:51:41 version -- app/version.sh@17 -- # get_header_version major 00:30:44.485 13:51:41 version -- app/version.sh@14 -- # cut -f2 00:30:44.485 13:51:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:30:44.485 13:51:41 version -- app/version.sh@14 -- # tr -d '"' 00:30:44.485 13:51:41 version -- app/version.sh@17 -- # major=25 00:30:44.485 13:51:41 version -- app/version.sh@18 -- # get_header_version minor 00:30:44.485 13:51:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:30:44.485 13:51:41 version -- app/version.sh@14 -- # cut -f2 00:30:44.485 13:51:41 version -- app/version.sh@14 -- # tr -d '"' 00:30:44.485 13:51:41 version -- app/version.sh@18 -- # minor=1 00:30:44.485 13:51:41 version -- app/version.sh@19 -- # get_header_version patch 00:30:44.485 13:51:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:30:44.485 13:51:41 version -- app/version.sh@14 -- # tr -d '"' 00:30:44.485 13:51:41 version -- app/version.sh@14 -- # cut -f2 00:30:44.485 13:51:41 version -- app/version.sh@19 -- # patch=0 00:30:44.485 13:51:41 version -- app/version.sh@20 -- # get_header_version suffix 00:30:44.485 13:51:41 version -- app/version.sh@14 -- # cut -f2 00:30:44.485 13:51:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:30:44.485 13:51:41 version -- app/version.sh@14 -- # tr -d '"' 00:30:44.485 13:51:41 version -- app/version.sh@20 -- # suffix=-pre 00:30:44.485 13:51:41 version -- app/version.sh@22 -- # version=25.1 00:30:44.485 13:51:41 version -- app/version.sh@25 -- # (( patch != 0 )) 00:30:44.745 13:51:41 version -- app/version.sh@28 -- # version=25.1rc0 00:30:44.745 13:51:41 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:30:44.745 13:51:41 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:30:44.745 13:51:41 version -- app/version.sh@30 -- # py_version=25.1rc0 00:30:44.745 13:51:41 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:30:44.745 00:30:44.745 real 0m0.263s 00:30:44.745 user 0m0.168s 00:30:44.745 sys 0m0.137s 00:30:44.745 13:51:41 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:44.745 13:51:41 version -- common/autotest_common.sh@10 -- # set +x 00:30:44.745 ************************************ 00:30:44.745 END TEST version 00:30:44.745 ************************************ 00:30:44.745 13:51:41 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:30:44.745 13:51:41 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:30:44.745 13:51:41 -- spdk/autotest.sh@194 -- # uname -s 00:30:44.745 13:51:41 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:30:44.745 13:51:41 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:30:44.745 13:51:41 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:30:44.745 13:51:41 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:30:44.745 13:51:41 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:30:44.745 13:51:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:44.745 13:51:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:44.745 13:51:41 -- common/autotest_common.sh@10 -- # set +x 00:30:44.745 ************************************ 00:30:44.745 START TEST spdk_dd 00:30:44.745 ************************************ 00:30:44.745 13:51:41 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:30:44.745 * Looking for test storage... 00:30:44.745 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:30:44.745 13:51:41 spdk_dd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:44.745 13:51:41 spdk_dd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:44.745 13:51:41 spdk_dd -- common/autotest_common.sh@1693 -- # lcov --version 00:30:45.003 13:51:42 spdk_dd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:45.003 13:51:42 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:45.003 13:51:42 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:45.003 13:51:42 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:45.003 13:51:42 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:30:45.003 13:51:42 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:30:45.003 13:51:42 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:30:45.003 13:51:42 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:30:45.003 13:51:42 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:30:45.003 13:51:42 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:30:45.003 13:51:42 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:30:45.003 13:51:42 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:45.003 13:51:42 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:30:45.003 13:51:42 spdk_dd -- scripts/common.sh@345 -- # : 1 00:30:45.003 13:51:42 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:45.003 13:51:42 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:45.003 13:51:42 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:30:45.003 13:51:42 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:30:45.003 13:51:42 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:45.003 13:51:42 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:30:45.003 13:51:42 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:30:45.003 13:51:42 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:30:45.003 13:51:42 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:30:45.003 13:51:42 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:45.003 13:51:42 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:30:45.003 13:51:42 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:30:45.003 13:51:42 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:45.003 13:51:42 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:45.003 13:51:42 spdk_dd -- scripts/common.sh@368 -- # return 0 00:30:45.003 13:51:42 spdk_dd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:45.003 13:51:42 spdk_dd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:45.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.003 --rc genhtml_branch_coverage=1 00:30:45.003 --rc genhtml_function_coverage=1 00:30:45.003 --rc genhtml_legend=1 00:30:45.003 --rc geninfo_all_blocks=1 00:30:45.003 --rc geninfo_unexecuted_blocks=1 00:30:45.003 00:30:45.003 ' 00:30:45.003 13:51:42 spdk_dd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:45.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.003 --rc genhtml_branch_coverage=1 00:30:45.003 --rc genhtml_function_coverage=1 00:30:45.003 --rc genhtml_legend=1 00:30:45.003 --rc geninfo_all_blocks=1 00:30:45.003 --rc geninfo_unexecuted_blocks=1 00:30:45.003 00:30:45.003 ' 00:30:45.003 13:51:42 spdk_dd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:45.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.003 --rc genhtml_branch_coverage=1 00:30:45.003 --rc genhtml_function_coverage=1 00:30:45.004 --rc genhtml_legend=1 00:30:45.004 --rc geninfo_all_blocks=1 00:30:45.004 --rc geninfo_unexecuted_blocks=1 00:30:45.004 00:30:45.004 ' 00:30:45.004 13:51:42 spdk_dd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:45.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.004 --rc genhtml_branch_coverage=1 00:30:45.004 --rc genhtml_function_coverage=1 00:30:45.004 --rc genhtml_legend=1 00:30:45.004 --rc geninfo_all_blocks=1 00:30:45.004 --rc geninfo_unexecuted_blocks=1 00:30:45.004 00:30:45.004 ' 00:30:45.004 13:51:42 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:45.004 13:51:42 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:30:45.004 13:51:42 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:45.004 13:51:42 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:45.004 13:51:42 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:45.004 13:51:42 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.004 13:51:42 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.004 13:51:42 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.004 13:51:42 spdk_dd -- paths/export.sh@5 -- # export PATH 00:30:45.004 13:51:42 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.004 13:51:42 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:45.261 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:45.261 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:45.261 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:45.261 13:51:42 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:30:45.261 13:51:42 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:30:45.261 13:51:42 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:30:45.261 13:51:42 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:30:45.261 13:51:42 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:30:45.261 13:51:42 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:30:45.261 13:51:42 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:30:45.261 13:51:42 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:30:45.261 13:51:42 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:30:45.262 13:51:42 spdk_dd -- scripts/common.sh@233 -- # local class 00:30:45.262 13:51:42 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:30:45.262 13:51:42 spdk_dd -- scripts/common.sh@235 -- # local progif 00:30:45.262 13:51:42 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:30:45.262 13:51:42 spdk_dd -- scripts/common.sh@236 -- # class=01 00:30:45.262 13:51:42 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:30:45.262 13:51:42 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:30:45.262 13:51:42 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:30:45.262 13:51:42 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:30:45.262 13:51:42 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:30:45.262 13:51:42 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:30:45.262 13:51:42 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:30:45.262 13:51:42 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:30:45.262 13:51:42 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:30:45.262 13:51:42 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:30:45.262 13:51:42 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:30:45.262 13:51:42 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:30:45.262 13:51:42 spdk_dd -- scripts/common.sh@18 -- # local i 00:30:45.262 13:51:42 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:30:45.262 13:51:42 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:30:45.262 13:51:42 spdk_dd -- scripts/common.sh@27 -- # return 0 00:30:45.262 13:51:42 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:30:45.262 13:51:42 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:30:45.262 13:51:42 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:30:45.262 13:51:42 spdk_dd -- scripts/common.sh@18 -- # local i 00:30:45.262 13:51:42 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:30:45.262 13:51:42 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:30:45.262 13:51:42 spdk_dd -- scripts/common.sh@27 -- # return 0 00:30:45.262 13:51:42 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:30:45.262 13:51:42 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:30:45.262 13:51:42 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:30:45.262 13:51:42 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:30:45.262 13:51:42 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:30:45.262 13:51:42 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:30:45.262 13:51:42 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:30:45.262 13:51:42 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:30:45.262 13:51:42 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:30:45.262 13:51:42 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:30:45.262 13:51:42 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:30:45.262 13:51:42 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:30:45.262 13:51:42 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:30:45.262 13:51:42 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:30:45.262 13:51:42 spdk_dd -- dd/common.sh@139 -- # local lib 00:30:45.262 13:51:42 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:30:45.262 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.262 13:51:42 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:45.262 13:51:42 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:30:45.522 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:30:45.522 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.522 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:30:45.522 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.522 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
00:30:45.522 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.522 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:30:45.522 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.522 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:30:45.522 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.522 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:30:45.522 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.523 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:30:45.524 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.524 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:30:45.524 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.524 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:30:45.524 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.524 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:30:45.524 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.524 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:30:45.524 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.524 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:30:45.524 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.524 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:30:45.524 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.524 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:30:45.524 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.524 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:30:45.524 13:51:42 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:30:45.524 13:51:42 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:30:45.524 13:51:42 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:30:45.524 * spdk_dd linked to liburing 00:30:45.524 13:51:42 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:30:45.524 13:51:42 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_CRYPTO=n 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:30:45.524 13:51:42 spdk_dd 
-- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:30:45.524 13:51:42 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:30:45.524 13:51:42 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:30:45.524 13:51:42 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:30:45.524 13:51:42 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:30:45.524 13:51:42 spdk_dd -- dd/common.sh@153 -- # return 0 00:30:45.524 13:51:42 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:30:45.524 13:51:42 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:30:45.524 13:51:42 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:45.524 13:51:42 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:45.524 13:51:42 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:30:45.524 ************************************ 00:30:45.524 START TEST spdk_dd_basic_rw 00:30:45.524 ************************************ 00:30:45.524 13:51:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:30:45.524 * Looking for test storage... 00:30:45.524 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:30:45.524 13:51:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:45.524 13:51:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:45.524 13:51:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lcov --version 00:30:45.524 13:51:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:45.524 13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:45.524 13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:45.524 13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:45.524 13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:30:45.524 13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:30:45.524 13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:30:45.524 13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:30:45.524 13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:30:45.524 13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:30:45.524 13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:30:45.524 13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:45.525 13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:30:45.525 13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:30:45.525 13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:45.525 13:51:42 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:45.525 13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:30:45.525 13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:30:45.525 13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:45.525 13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:30:45.525 13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:30:45.525 13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:30:45.525 13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:30:45.525 13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:45.525 13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:30:45.525 13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:30:45.525 13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:45.525 13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:45.525 13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:30:45.525 13:51:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:45.525 13:51:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:45.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.525 --rc genhtml_branch_coverage=1 00:30:45.525 --rc genhtml_function_coverage=1 00:30:45.525 --rc genhtml_legend=1 00:30:45.525 --rc geninfo_all_blocks=1 00:30:45.525 --rc geninfo_unexecuted_blocks=1 00:30:45.525 00:30:45.525 ' 00:30:45.525 13:51:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:45.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.525 --rc genhtml_branch_coverage=1 00:30:45.525 --rc genhtml_function_coverage=1 00:30:45.525 --rc genhtml_legend=1 00:30:45.525 --rc geninfo_all_blocks=1 00:30:45.525 --rc geninfo_unexecuted_blocks=1 00:30:45.525 00:30:45.525 ' 00:30:45.525 13:51:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:45.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.525 --rc genhtml_branch_coverage=1 00:30:45.525 --rc genhtml_function_coverage=1 00:30:45.525 --rc genhtml_legend=1 00:30:45.525 --rc geninfo_all_blocks=1 00:30:45.525 --rc geninfo_unexecuted_blocks=1 00:30:45.525 00:30:45.525 ' 00:30:45.525 13:51:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:45.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.525 --rc genhtml_branch_coverage=1 00:30:45.525 --rc genhtml_function_coverage=1 00:30:45.525 --rc genhtml_legend=1 00:30:45.525 --rc geninfo_all_blocks=1 00:30:45.525 --rc geninfo_unexecuted_blocks=1 00:30:45.525 00:30:45.525 ' 00:30:45.525 13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:45.525 13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:30:45.525 13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:45.525 13:51:42 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:45.525 13:51:42 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:45.525 13:51:42 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.525 13:51:42 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.525 13:51:42 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.525 13:51:42 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:30:45.525 13:51:42 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.525 13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:30:45.525 13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:30:45.525 13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:30:45.525 13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:30:45.525 13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:30:45.525 13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:30:45.525 13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:30:45.525 13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:30:45.525 13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
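The get_native_nvme_bs trace that follows captures spdk_nvme_identify output and regex-extracts, first, the namespace's current LBA format index and, second, that format's data size, which becomes the native block size (4096 here). A minimal standalone sketch of the same parsing, assuming a hypothetical helper name get_native_bs_sketch; the identify binary path and both regexes are taken from the trace below:

    get_native_bs_sketch() {
        local pci=$1 id lbaf
        # Full controller/namespace report for the given PCIe address, captured as one string
        id=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:pcie traddr:${pci}")
        # Index of the namespace's current LBA format, e.g. "04"
        local re_current='Current LBA Format: *LBA Format #([0-9]+)'
        [[ $id =~ $re_current ]] && lbaf=${BASH_REMATCH[1]}
        # That format's data size in bytes (4096 for LBA Format #04) is the native block size
        local re_size="LBA Format #${lbaf}: Data Size: *([0-9]+)"
        [[ $id =~ $re_size ]] && echo "${BASH_REMATCH[1]}"
    }

The value echoed here is what the dd_bs_lt_native_bs test later compares a deliberately smaller --bs (2048) against, expecting spdk_dd to reject it.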
00:30:45.525 13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:30:45.525 13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:30:45.525 13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:30:45.525 13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:30:45.786 13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update 
Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 
Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:30:45.786 13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:30:45.787 13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration 
Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported 
SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format 
#02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:30:45.787 13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:30:45.787 13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:30:45.787 13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:30:45.787 13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:30:45.787 13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:30:45.787 13:51:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:30:45.787 13:51:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:45.787 13:51:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:30:45.787 13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:30:45.787 13:51:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:30:45.787 13:51:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:30:45.787 ************************************ 00:30:45.787 START TEST dd_bs_lt_native_bs 00:30:45.787 ************************************ 00:30:45.787 13:51:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:30:45.787 13:51:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0 00:30:45.787 13:51:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:30:45.787 13:51:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:45.787 13:51:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:45.787 13:51:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:45.787 13:51:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:45.787 13:51:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:45.787 13:51:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:45.787 13:51:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:45.787 13:51:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:30:45.787 13:51:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:30:45.787 { 00:30:45.787 "subsystems": [ 00:30:45.787 { 00:30:45.787 "subsystem": "bdev", 00:30:45.787 "config": [ 00:30:45.787 { 00:30:45.787 "params": { 00:30:45.787 "trtype": "pcie", 00:30:45.787 "traddr": "0000:00:10.0", 00:30:45.787 "name": "Nvme0" 00:30:45.787 }, 00:30:45.787 "method": "bdev_nvme_attach_controller" 00:30:45.787 }, 00:30:45.787 { 00:30:45.787 "method": "bdev_wait_for_examine" 00:30:45.787 } 00:30:45.787 ] 00:30:45.787 } 00:30:45.787 ] 00:30:45.787 } 00:30:45.788 [2024-11-20 13:51:43.058020] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:30:45.788 [2024-11-20 13:51:43.058094] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59861 ] 00:30:46.046 [2024-11-20 13:51:43.209630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:46.046 [2024-11-20 13:51:43.268821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:46.046 [2024-11-20 13:51:43.324864] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:30:46.305 [2024-11-20 13:51:43.437986] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:30:46.305 [2024-11-20 13:51:43.438152] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:46.305 [2024-11-20 13:51:43.544823] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:30:46.305 ************************************ 00:30:46.305 END TEST dd_bs_lt_native_bs 00:30:46.305 ************************************ 00:30:46.305 13:51:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:30:46.305 13:51:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:46.305 13:51:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:30:46.305 13:51:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:30:46.305 13:51:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:30:46.305 13:51:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:46.305 00:30:46.305 real 0m0.608s 00:30:46.305 user 0m0.394s 00:30:46.305 sys 0m0.156s 00:30:46.305 
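What dd_bs_lt_native_bs checked above: the harness scraped the controller's current LBA data size (4096) out of the identify text with the regex visible in the trace, then asked spdk_dd to copy with --bs=2048 and treated the "--bs value cannot be less than ... native block size" error as the pass condition. A minimal standalone sketch of the same probe, assuming the identify text has been saved to a file named identify.txt and that 2048 is only an example request, could look like this:

  #!/usr/bin/env bash
  # Sketch only: derive the native block size from an identify dump and note
  # that a smaller requested --bs is expected to be rejected.
  set -euo pipefail

  identify_out=$(<identify.txt)   # assumed: saved controller identify text
  requested_bs=2048               # assumed example request

  # Which LBA format is current, e.g. "Current LBA Format: LBA Format #04"
  re='Current LBA Format: *LBA Format #([0-9]+)'
  [[ $identify_out =~ $re ]]
  cur=${BASH_REMATCH[1]}

  # Data size of that format, e.g. "LBA Format #04: Data Size: 4096 ..."
  re="LBA Format #${cur}: Data Size: *([0-9]+)"
  [[ $identify_out =~ $re ]]
  native_bs=${BASH_REMATCH[1]}

  # The test wraps the copy in a NOT helper: the pass condition is that the
  # copy fails. A minimal NOT in the same spirit:
  NOT() { ! "$@"; }
  # NOT /path/to/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=$requested_bs --json /dev/fd/61
  if (( requested_bs < native_bs )); then
      echo "expect spdk_dd to reject --bs=$requested_bs (native block size is $native_bs)"
  fi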
13:51:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:46.305 13:51:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:30:46.563 13:51:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:30:46.563 13:51:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:46.563 13:51:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:46.563 13:51:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:30:46.563 ************************************ 00:30:46.564 START TEST dd_rw 00:30:46.564 ************************************ 00:30:46.564 13:51:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:30:46.564 13:51:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:30:46.564 13:51:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:30:46.564 13:51:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:30:46.564 13:51:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:30:46.564 13:51:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:30:46.564 13:51:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:30:46.564 13:51:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:30:46.564 13:51:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:30:46.564 13:51:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:30:46.564 13:51:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:30:46.564 13:51:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:30:46.564 13:51:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:30:46.564 13:51:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:30:46.564 13:51:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:30:46.564 13:51:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:30:46.564 13:51:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:30:46.564 13:51:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:30:46.564 13:51:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:30:46.822 13:51:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:30:46.822 13:51:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:30:46.822 13:51:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:30:46.822 13:51:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:30:47.080 [2024-11-20 13:51:44.179311] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
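The dd_rw setup traced above builds its matrix by left-shifting the native block size, so the passes below run at bs = 4096, 8192 and 16384 bytes, each at queue depths 1 and 64. A small sketch of that matrix, where the per-size count merely reproduces the 61440/57344/49152-byte totals seen in the traces and is not necessarily how basic_rw computes it:

  native_bs=4096          # from the identify probe above
  qds=(1 64)              # queue depths exercised per block size

  bss=()
  for i in 0 1 2; do
      bss+=( $(( native_bs << i )) )   # 4096, 8192, 16384
  done

  for bs in "${bss[@]}"; do
      count=$(( 61440 / bs ))          # 15, 7, 3 blocks
      size=$(( count * bs ))           # 61440, 57344, 49152 bytes
      for qd in "${qds[@]}"; do
          echo "pass: bs=$bs qd=$qd count=$count size=$size"
      done
  done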
00:30:47.080 [2024-11-20 13:51:44.179472] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59893 ] 00:30:47.080 { 00:30:47.080 "subsystems": [ 00:30:47.080 { 00:30:47.080 "subsystem": "bdev", 00:30:47.080 "config": [ 00:30:47.080 { 00:30:47.080 "params": { 00:30:47.080 "trtype": "pcie", 00:30:47.080 "traddr": "0000:00:10.0", 00:30:47.080 "name": "Nvme0" 00:30:47.080 }, 00:30:47.080 "method": "bdev_nvme_attach_controller" 00:30:47.080 }, 00:30:47.080 { 00:30:47.080 "method": "bdev_wait_for_examine" 00:30:47.080 } 00:30:47.080 ] 00:30:47.080 } 00:30:47.080 ] 00:30:47.080 } 00:30:47.080 [2024-11-20 13:51:44.376592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:47.339 [2024-11-20 13:51:44.434237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:47.339 [2024-11-20 13:51:44.477569] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:30:47.339  [2024-11-20T13:51:44.922Z] Copying: 60/60 [kB] (average 19 MBps) 00:30:47.599 00:30:47.599 13:51:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:30:47.599 13:51:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:30:47.599 13:51:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:30:47.599 13:51:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:30:47.599 { 00:30:47.599 "subsystems": [ 00:30:47.599 { 00:30:47.599 "subsystem": "bdev", 00:30:47.599 "config": [ 00:30:47.599 { 00:30:47.599 "params": { 00:30:47.599 "trtype": "pcie", 00:30:47.599 "traddr": "0000:00:10.0", 00:30:47.599 "name": "Nvme0" 00:30:47.599 }, 00:30:47.599 "method": "bdev_nvme_attach_controller" 00:30:47.599 }, 00:30:47.599 { 00:30:47.599 "method": "bdev_wait_for_examine" 00:30:47.599 } 00:30:47.599 ] 00:30:47.599 } 00:30:47.599 ] 00:30:47.599 } 00:30:47.599 [2024-11-20 13:51:44.805056] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:30:47.599 [2024-11-20 13:51:44.805134] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59907 ] 00:30:47.872 [2024-11-20 13:51:44.953503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:47.872 [2024-11-20 13:51:45.010741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:47.872 [2024-11-20 13:51:45.053836] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:30:47.872  [2024-11-20T13:51:45.479Z] Copying: 60/60 [kB] (average 19 MBps) 00:30:48.156 00:30:48.156 13:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:48.156 13:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:30:48.156 13:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:30:48.156 13:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:30:48.156 13:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:30:48.156 13:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:30:48.156 13:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:30:48.156 13:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:30:48.156 13:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:30:48.156 13:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:30:48.156 13:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:30:48.156 { 00:30:48.156 "subsystems": [ 00:30:48.156 { 00:30:48.156 "subsystem": "bdev", 00:30:48.156 "config": [ 00:30:48.156 { 00:30:48.156 "params": { 00:30:48.156 "trtype": "pcie", 00:30:48.156 "traddr": "0000:00:10.0", 00:30:48.156 "name": "Nvme0" 00:30:48.156 }, 00:30:48.156 "method": "bdev_nvme_attach_controller" 00:30:48.156 }, 00:30:48.156 { 00:30:48.156 "method": "bdev_wait_for_examine" 00:30:48.156 } 00:30:48.156 ] 00:30:48.156 } 00:30:48.156 ] 00:30:48.156 } 00:30:48.156 [2024-11-20 13:51:45.392962] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
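Each pass is a simple round trip: spdk_dd writes dd.dump0 into the Nvme0n1 bdev, reads the same region back into dd.dump1, diff -q confirms the two files match, and the bdev is then cleared for the next pass. A condensed sketch of one such round, assuming an SPDK checkout at $SPDK with spdk_dd built, a controller at PCIe address 0000:00:10.0, and any 61440-byte file as the test pattern:

  SPDK=${SPDK:-/home/vagrant/spdk_repo/spdk}     # assumed checkout location
  DD="$SPDK/build/bin/spdk_dd"
  conf=$(cat <<'JSON'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
            "method": "bdev_nvme_attach_controller" },
          { "method": "bdev_wait_for_examine" }
        ]
      }
    ]
  }
  JSON
  )

  head -c 61440 /dev/urandom > dd.dump0          # stand-in for the gen_bytes pattern

  # Write 15 x 4 KiB through the bdev, read it back, and verify.
  "$DD" --if=dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1            --json <(echo "$conf")
  "$DD" --ib=Nvme0n1 --of=dd.dump1 --bs=4096 --qd=1 --count=15 --json <(echo "$conf")
  diff -q dd.dump0 dd.dump1 && echo "round trip OK"

Passing the config through process substitution is what produces the --json /dev/fd/62 arguments seen throughout the traces: the JSON never touches disk, spdk_dd just reads it from an inherited file descriptor.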
00:30:48.156 [2024-11-20 13:51:45.393086] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59922 ] 00:30:48.415 [2024-11-20 13:51:45.535576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:48.415 [2024-11-20 13:51:45.604332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:48.415 [2024-11-20 13:51:45.649001] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:30:48.674  [2024-11-20T13:51:45.997Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:30:48.674 00:30:48.674 13:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:30:48.674 13:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:30:48.674 13:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:30:48.674 13:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:30:48.674 13:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:30:48.674 13:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:30:48.675 13:51:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:30:49.243 13:51:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:30:49.243 13:51:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:30:49.243 13:51:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:30:49.243 13:51:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:30:49.243 { 00:30:49.243 "subsystems": [ 00:30:49.243 { 00:30:49.243 "subsystem": "bdev", 00:30:49.243 "config": [ 00:30:49.243 { 00:30:49.243 "params": { 00:30:49.243 "trtype": "pcie", 00:30:49.243 "traddr": "0000:00:10.0", 00:30:49.243 "name": "Nvme0" 00:30:49.243 }, 00:30:49.243 "method": "bdev_nvme_attach_controller" 00:30:49.243 }, 00:30:49.243 { 00:30:49.243 "method": "bdev_wait_for_examine" 00:30:49.243 } 00:30:49.243 ] 00:30:49.243 } 00:30:49.243 ] 00:30:49.243 } 00:30:49.243 [2024-11-20 13:51:46.477723] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:30:49.243 [2024-11-20 13:51:46.477789] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59941 ] 00:30:49.503 [2024-11-20 13:51:46.626781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:49.503 [2024-11-20 13:51:46.682952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:49.503 [2024-11-20 13:51:46.726100] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:30:49.761  [2024-11-20T13:51:47.085Z] Copying: 60/60 [kB] (average 58 MBps) 00:30:49.762 00:30:49.762 13:51:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:30:49.762 13:51:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:30:49.762 13:51:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:30:49.762 13:51:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:30:49.762 { 00:30:49.762 "subsystems": [ 00:30:49.762 { 00:30:49.762 "subsystem": "bdev", 00:30:49.762 "config": [ 00:30:49.762 { 00:30:49.762 "params": { 00:30:49.762 "trtype": "pcie", 00:30:49.762 "traddr": "0000:00:10.0", 00:30:49.762 "name": "Nvme0" 00:30:49.762 }, 00:30:49.762 "method": "bdev_nvme_attach_controller" 00:30:49.762 }, 00:30:49.762 { 00:30:49.762 "method": "bdev_wait_for_examine" 00:30:49.762 } 00:30:49.762 ] 00:30:49.762 } 00:30:49.762 ] 00:30:49.762 } 00:30:49.762 [2024-11-20 13:51:47.055212] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:30:49.762 [2024-11-20 13:51:47.055284] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59955 ] 00:30:50.021 [2024-11-20 13:51:47.188567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:50.021 [2024-11-20 13:51:47.244757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:50.021 [2024-11-20 13:51:47.288610] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:30:50.280  [2024-11-20T13:51:47.603Z] Copying: 60/60 [kB] (average 58 MBps) 00:30:50.280 00:30:50.280 13:51:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:50.280 13:51:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:30:50.280 13:51:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:30:50.280 13:51:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:30:50.280 13:51:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:30:50.280 13:51:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:30:50.280 13:51:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:30:50.280 13:51:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:30:50.280 13:51:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:30:50.280 13:51:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:30:50.280 13:51:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:30:50.540 [2024-11-20 13:51:47.630244] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:30:50.540 [2024-11-20 13:51:47.630333] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59970 ] 00:30:50.540 { 00:30:50.540 "subsystems": [ 00:30:50.540 { 00:30:50.540 "subsystem": "bdev", 00:30:50.540 "config": [ 00:30:50.540 { 00:30:50.540 "params": { 00:30:50.540 "trtype": "pcie", 00:30:50.540 "traddr": "0000:00:10.0", 00:30:50.540 "name": "Nvme0" 00:30:50.540 }, 00:30:50.540 "method": "bdev_nvme_attach_controller" 00:30:50.540 }, 00:30:50.540 { 00:30:50.540 "method": "bdev_wait_for_examine" 00:30:50.540 } 00:30:50.540 ] 00:30:50.540 } 00:30:50.540 ] 00:30:50.540 } 00:30:50.540 [2024-11-20 13:51:47.782442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:50.540 [2024-11-20 13:51:47.843694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:50.800 [2024-11-20 13:51:47.889016] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:30:50.800  [2024-11-20T13:51:48.381Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:30:51.058 00:30:51.058 13:51:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:30:51.058 13:51:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:30:51.058 13:51:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:30:51.058 13:51:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:30:51.058 13:51:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:30:51.058 13:51:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:30:51.058 13:51:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:30:51.058 13:51:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:30:51.316 13:51:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:30:51.316 13:51:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:30:51.316 13:51:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:30:51.316 13:51:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:30:51.580 [2024-11-20 13:51:48.678255] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:30:51.580 [2024-11-20 13:51:48.678431] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59989 ] 00:30:51.580 { 00:30:51.580 "subsystems": [ 00:30:51.580 { 00:30:51.580 "subsystem": "bdev", 00:30:51.580 "config": [ 00:30:51.580 { 00:30:51.580 "params": { 00:30:51.580 "trtype": "pcie", 00:30:51.580 "traddr": "0000:00:10.0", 00:30:51.580 "name": "Nvme0" 00:30:51.580 }, 00:30:51.580 "method": "bdev_nvme_attach_controller" 00:30:51.580 }, 00:30:51.580 { 00:30:51.580 "method": "bdev_wait_for_examine" 00:30:51.580 } 00:30:51.580 ] 00:30:51.580 } 00:30:51.580 ] 00:30:51.580 } 00:30:51.580 [2024-11-20 13:51:48.827141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:51.580 [2024-11-20 13:51:48.884588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:51.839 [2024-11-20 13:51:48.928994] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:30:51.839  [2024-11-20T13:51:49.420Z] Copying: 56/56 [kB] (average 54 MBps) 00:30:52.097 00:30:52.097 13:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:30:52.097 13:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:30:52.097 13:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:30:52.097 13:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:30:52.097 { 00:30:52.097 "subsystems": [ 00:30:52.097 { 00:30:52.097 "subsystem": "bdev", 00:30:52.097 "config": [ 00:30:52.097 { 00:30:52.097 "params": { 00:30:52.097 "trtype": "pcie", 00:30:52.097 "traddr": "0000:00:10.0", 00:30:52.097 "name": "Nvme0" 00:30:52.097 }, 00:30:52.097 "method": "bdev_nvme_attach_controller" 00:30:52.097 }, 00:30:52.097 { 00:30:52.097 "method": "bdev_wait_for_examine" 00:30:52.097 } 00:30:52.097 ] 00:30:52.097 } 00:30:52.097 ] 00:30:52.097 } 00:30:52.097 [2024-11-20 13:51:49.254108] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:30:52.097 [2024-11-20 13:51:49.254189] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60003 ] 00:30:52.097 [2024-11-20 13:51:49.403407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.355 [2024-11-20 13:51:49.458622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:52.355 [2024-11-20 13:51:49.502539] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:30:52.355  [2024-11-20T13:51:49.937Z] Copying: 56/56 [kB] (average 27 MBps) 00:30:52.614 00:30:52.614 13:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:52.614 13:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:30:52.614 13:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:30:52.614 13:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:30:52.614 13:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:30:52.614 13:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:30:52.614 13:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:30:52.614 13:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:30:52.614 13:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:30:52.614 13:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:30:52.614 13:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:30:52.614 [2024-11-20 13:51:49.834294] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
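The clear_nvme step traced above resets the bdev between passes by overwriting the region that was just exercised with zeros; one 1 MiB block comfortably covers the largest 61440-byte pass. A sketch of just that step, with the checkout path and one-line JSON config as assumptions:

  SPDK=${SPDK:-/home/vagrant/spdk_repo/spdk}
  DD="$SPDK/build/bin/spdk_dd"
  conf='{"subsystems":[{"subsystem":"bdev","config":[{"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},"method":"bdev_nvme_attach_controller"},{"method":"bdev_wait_for_examine"}]}]}'

  # Zero the first 1 MiB of the bdev so the next pass starts from a known state.
  "$DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(echo "$conf")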
00:30:52.614 [2024-11-20 13:51:49.834459] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60024 ] 00:30:52.614 { 00:30:52.614 "subsystems": [ 00:30:52.614 { 00:30:52.614 "subsystem": "bdev", 00:30:52.614 "config": [ 00:30:52.614 { 00:30:52.614 "params": { 00:30:52.614 "trtype": "pcie", 00:30:52.614 "traddr": "0000:00:10.0", 00:30:52.614 "name": "Nvme0" 00:30:52.614 }, 00:30:52.614 "method": "bdev_nvme_attach_controller" 00:30:52.614 }, 00:30:52.614 { 00:30:52.614 "method": "bdev_wait_for_examine" 00:30:52.614 } 00:30:52.614 ] 00:30:52.614 } 00:30:52.614 ] 00:30:52.614 } 00:30:52.873 [2024-11-20 13:51:49.984195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.873 [2024-11-20 13:51:50.038142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:52.873 [2024-11-20 13:51:50.081372] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:30:52.873  [2024-11-20T13:51:50.455Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:30:53.132 00:30:53.132 13:51:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:30:53.132 13:51:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:30:53.132 13:51:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:30:53.132 13:51:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:30:53.132 13:51:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:30:53.132 13:51:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:30:53.132 13:51:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:30:53.699 13:51:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:30:53.699 13:51:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:30:53.699 13:51:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:30:53.699 13:51:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:30:53.699 { 00:30:53.699 "subsystems": [ 00:30:53.699 { 00:30:53.699 "subsystem": "bdev", 00:30:53.700 "config": [ 00:30:53.700 { 00:30:53.700 "params": { 00:30:53.700 "trtype": "pcie", 00:30:53.700 "traddr": "0000:00:10.0", 00:30:53.700 "name": "Nvme0" 00:30:53.700 }, 00:30:53.700 "method": "bdev_nvme_attach_controller" 00:30:53.700 }, 00:30:53.700 { 00:30:53.700 "method": "bdev_wait_for_examine" 00:30:53.700 } 00:30:53.700 ] 00:30:53.700 } 00:30:53.700 ] 00:30:53.700 } 00:30:53.700 [2024-11-20 13:51:50.848594] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:30:53.700 [2024-11-20 13:51:50.848720] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60043 ] 00:30:53.700 [2024-11-20 13:51:50.994740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:53.959 [2024-11-20 13:51:51.048876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:53.959 [2024-11-20 13:51:51.091529] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:30:53.959  [2024-11-20T13:51:51.540Z] Copying: 56/56 [kB] (average 54 MBps) 00:30:54.217 00:30:54.217 13:51:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:30:54.217 13:51:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:30:54.217 13:51:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:30:54.217 13:51:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:30:54.217 { 00:30:54.217 "subsystems": [ 00:30:54.217 { 00:30:54.217 "subsystem": "bdev", 00:30:54.217 "config": [ 00:30:54.217 { 00:30:54.217 "params": { 00:30:54.217 "trtype": "pcie", 00:30:54.217 "traddr": "0000:00:10.0", 00:30:54.217 "name": "Nvme0" 00:30:54.217 }, 00:30:54.217 "method": "bdev_nvme_attach_controller" 00:30:54.217 }, 00:30:54.217 { 00:30:54.217 "method": "bdev_wait_for_examine" 00:30:54.217 } 00:30:54.217 ] 00:30:54.217 } 00:30:54.217 ] 00:30:54.217 } 00:30:54.217 [2024-11-20 13:51:51.410548] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:30:54.217 [2024-11-20 13:51:51.410673] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60051 ] 00:30:54.476 [2024-11-20 13:51:51.559696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:54.476 [2024-11-20 13:51:51.613774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:54.476 [2024-11-20 13:51:51.657034] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:30:54.476  [2024-11-20T13:51:52.058Z] Copying: 56/56 [kB] (average 54 MBps) 00:30:54.735 00:30:54.735 13:51:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:54.735 13:51:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:30:54.735 13:51:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:30:54.735 13:51:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:30:54.735 13:51:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:30:54.735 13:51:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:30:54.735 13:51:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:30:54.735 13:51:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:30:54.735 13:51:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:30:54.735 13:51:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:30:54.735 13:51:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:30:54.735 { 00:30:54.735 "subsystems": [ 00:30:54.735 { 00:30:54.735 "subsystem": "bdev", 00:30:54.735 "config": [ 00:30:54.735 { 00:30:54.735 "params": { 00:30:54.735 "trtype": "pcie", 00:30:54.735 "traddr": "0000:00:10.0", 00:30:54.735 "name": "Nvme0" 00:30:54.735 }, 00:30:54.735 "method": "bdev_nvme_attach_controller" 00:30:54.735 }, 00:30:54.735 { 00:30:54.735 "method": "bdev_wait_for_examine" 00:30:54.735 } 00:30:54.735 ] 00:30:54.735 } 00:30:54.735 ] 00:30:54.735 } 00:30:54.735 [2024-11-20 13:51:51.987366] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:30:54.735 [2024-11-20 13:51:51.987494] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60072 ] 00:30:54.995 [2024-11-20 13:51:52.137370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:54.995 [2024-11-20 13:51:52.193045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:54.995 [2024-11-20 13:51:52.235987] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:30:55.254  [2024-11-20T13:51:52.577Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:30:55.254 00:30:55.254 13:51:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:30:55.254 13:51:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:30:55.254 13:51:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:30:55.254 13:51:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:30:55.254 13:51:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:30:55.254 13:51:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:30:55.254 13:51:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:30:55.254 13:51:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:30:55.881 13:51:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:30:55.881 13:51:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:30:55.881 13:51:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:30:55.881 13:51:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:30:55.881 { 00:30:55.881 "subsystems": [ 00:30:55.881 { 00:30:55.881 "subsystem": "bdev", 00:30:55.881 "config": [ 00:30:55.881 { 00:30:55.881 "params": { 00:30:55.881 "trtype": "pcie", 00:30:55.881 "traddr": "0000:00:10.0", 00:30:55.881 "name": "Nvme0" 00:30:55.881 }, 00:30:55.881 "method": "bdev_nvme_attach_controller" 00:30:55.881 }, 00:30:55.881 { 00:30:55.881 "method": "bdev_wait_for_examine" 00:30:55.881 } 00:30:55.881 ] 00:30:55.881 } 00:30:55.881 ] 00:30:55.881 } 00:30:55.881 [2024-11-20 13:51:52.942178] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:30:55.881 [2024-11-20 13:51:52.942321] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60091 ] 00:30:55.881 [2024-11-20 13:51:53.091156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:55.881 [2024-11-20 13:51:53.149529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:55.881 [2024-11-20 13:51:53.193862] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:30:56.141  [2024-11-20T13:51:53.464Z] Copying: 48/48 [kB] (average 46 MBps) 00:30:56.141 00:30:56.401 13:51:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:30:56.401 13:51:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:30:56.401 13:51:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:30:56.401 13:51:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:30:56.401 [2024-11-20 13:51:53.519602] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:30:56.401 [2024-11-20 13:51:53.519739] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60101 ] 00:30:56.401 { 00:30:56.401 "subsystems": [ 00:30:56.401 { 00:30:56.401 "subsystem": "bdev", 00:30:56.401 "config": [ 00:30:56.401 { 00:30:56.401 "params": { 00:30:56.401 "trtype": "pcie", 00:30:56.401 "traddr": "0000:00:10.0", 00:30:56.401 "name": "Nvme0" 00:30:56.401 }, 00:30:56.401 "method": "bdev_nvme_attach_controller" 00:30:56.401 }, 00:30:56.401 { 00:30:56.401 "method": "bdev_wait_for_examine" 00:30:56.401 } 00:30:56.401 ] 00:30:56.401 } 00:30:56.401 ] 00:30:56.401 } 00:30:56.401 [2024-11-20 13:51:53.669001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:56.661 [2024-11-20 13:51:53.723667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:56.661 [2024-11-20 13:51:53.767868] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:30:56.661  [2024-11-20T13:51:54.243Z] Copying: 48/48 [kB] (average 46 MBps) 00:30:56.920 00:30:56.920 13:51:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:56.920 13:51:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:30:56.920 13:51:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:30:56.920 13:51:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:30:56.920 13:51:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:30:56.920 13:51:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:30:56.920 13:51:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:30:56.920 13:51:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 
00:30:56.920 13:51:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:30:56.920 13:51:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:30:56.920 13:51:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:30:56.920 [2024-11-20 13:51:54.106292] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:30:56.920 [2024-11-20 13:51:54.106450] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60122 ] 00:30:56.920 { 00:30:56.920 "subsystems": [ 00:30:56.920 { 00:30:56.920 "subsystem": "bdev", 00:30:56.920 "config": [ 00:30:56.920 { 00:30:56.920 "params": { 00:30:56.920 "trtype": "pcie", 00:30:56.920 "traddr": "0000:00:10.0", 00:30:56.920 "name": "Nvme0" 00:30:56.920 }, 00:30:56.920 "method": "bdev_nvme_attach_controller" 00:30:56.920 }, 00:30:56.920 { 00:30:56.920 "method": "bdev_wait_for_examine" 00:30:56.920 } 00:30:56.920 ] 00:30:56.920 } 00:30:56.920 ] 00:30:56.920 } 00:30:57.180 [2024-11-20 13:51:54.258044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:57.180 [2024-11-20 13:51:54.316303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:57.180 [2024-11-20 13:51:54.360692] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:30:57.180  [2024-11-20T13:51:54.763Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:30:57.440 00:30:57.440 13:51:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:30:57.440 13:51:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:30:57.440 13:51:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:30:57.440 13:51:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:30:57.440 13:51:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:30:57.440 13:51:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:30:57.440 13:51:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:30:58.009 13:51:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:30:58.009 13:51:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:30:58.009 13:51:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:30:58.009 13:51:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:30:58.009 [2024-11-20 13:51:55.107122] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:30:58.009 [2024-11-20 13:51:55.107283] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60141 ] 00:30:58.009 { 00:30:58.009 "subsystems": [ 00:30:58.009 { 00:30:58.009 "subsystem": "bdev", 00:30:58.009 "config": [ 00:30:58.009 { 00:30:58.009 "params": { 00:30:58.009 "trtype": "pcie", 00:30:58.009 "traddr": "0000:00:10.0", 00:30:58.009 "name": "Nvme0" 00:30:58.009 }, 00:30:58.009 "method": "bdev_nvme_attach_controller" 00:30:58.009 }, 00:30:58.009 { 00:30:58.009 "method": "bdev_wait_for_examine" 00:30:58.009 } 00:30:58.009 ] 00:30:58.009 } 00:30:58.009 ] 00:30:58.009 } 00:30:58.009 [2024-11-20 13:51:55.257168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:58.009 [2024-11-20 13:51:55.315408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:58.269 [2024-11-20 13:51:55.359768] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:30:58.269  [2024-11-20T13:51:55.851Z] Copying: 48/48 [kB] (average 46 MBps) 00:30:58.528 00:30:58.528 13:51:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:30:58.528 13:51:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:30:58.528 13:51:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:30:58.528 13:51:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:30:58.528 [2024-11-20 13:51:55.688634] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:30:58.529 [2024-11-20 13:51:55.688803] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60153 ] 00:30:58.529 { 00:30:58.529 "subsystems": [ 00:30:58.529 { 00:30:58.529 "subsystem": "bdev", 00:30:58.529 "config": [ 00:30:58.529 { 00:30:58.529 "params": { 00:30:58.529 "trtype": "pcie", 00:30:58.529 "traddr": "0000:00:10.0", 00:30:58.529 "name": "Nvme0" 00:30:58.529 }, 00:30:58.529 "method": "bdev_nvme_attach_controller" 00:30:58.529 }, 00:30:58.529 { 00:30:58.529 "method": "bdev_wait_for_examine" 00:30:58.529 } 00:30:58.529 ] 00:30:58.529 } 00:30:58.529 ] 00:30:58.529 } 00:30:58.529 [2024-11-20 13:51:55.836481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:58.788 [2024-11-20 13:51:55.894495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:58.788 [2024-11-20 13:51:55.938456] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:30:58.788  [2024-11-20T13:51:56.371Z] Copying: 48/48 [kB] (average 46 MBps) 00:30:59.048 00:30:59.048 13:51:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:59.048 13:51:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:30:59.048 13:51:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:30:59.048 13:51:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:30:59.048 13:51:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:30:59.048 13:51:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:30:59.048 13:51:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:30:59.048 13:51:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:30:59.048 13:51:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:30:59.048 13:51:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:30:59.048 13:51:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:30:59.048 [2024-11-20 13:51:56.276311] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:30:59.048 [2024-11-20 13:51:56.276387] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60170 ] 00:30:59.048 { 00:30:59.048 "subsystems": [ 00:30:59.048 { 00:30:59.048 "subsystem": "bdev", 00:30:59.048 "config": [ 00:30:59.048 { 00:30:59.048 "params": { 00:30:59.048 "trtype": "pcie", 00:30:59.048 "traddr": "0000:00:10.0", 00:30:59.048 "name": "Nvme0" 00:30:59.048 }, 00:30:59.048 "method": "bdev_nvme_attach_controller" 00:30:59.048 }, 00:30:59.048 { 00:30:59.048 "method": "bdev_wait_for_examine" 00:30:59.048 } 00:30:59.048 ] 00:30:59.048 } 00:30:59.048 ] 00:30:59.048 } 00:30:59.307 [2024-11-20 13:51:56.417813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:59.307 [2024-11-20 13:51:56.475466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:59.307 [2024-11-20 13:51:56.519358] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:30:59.307  [2024-11-20T13:51:56.889Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:30:59.566 00:30:59.566 00:30:59.566 real 0m13.162s 00:30:59.566 user 0m9.615s 00:30:59.566 sys 0m4.699s 00:30:59.566 13:51:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:59.566 ************************************ 00:30:59.566 END TEST dd_rw 00:30:59.566 ************************************ 00:30:59.566 13:51:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:30:59.566 13:51:56 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:30:59.566 13:51:56 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:59.566 13:51:56 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:59.566 13:51:56 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:30:59.566 ************************************ 00:30:59.566 START TEST dd_rw_offset 00:30:59.566 ************************************ 00:30:59.566 13:51:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:30:59.566 13:51:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:30:59.566 13:51:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:30:59.566 13:51:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:30:59.566 13:51:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:30:59.828 13:51:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:30:59.829 13:51:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=ppxfym1df8oqd542ozdmleaijw91hadzp5zzq9y2055dj0a61fgoxrpir4jumfucg886dy0d4z6kw4mh25ovlj25kt0ucht2ajiur140f7v67cghihko814csdpezxywgcogmfl1ugd4lhxj6baxh4bfqfo855tbfuik78aloxvii3k7dvboze69f23pt58u1mceh4qszl9iv3kc1ufo2ch8v64u03ca34h226r81h5rlp8gyg2eoi6acvupi5ddn7llnj0vdp15megnyxz40owbu70ismu4kms1lh480b6qeuv9qvpnkxv5oqub0s8g41vbtza4rytjpnl48nubwbf7vrgj7kat8cg8io1e2zkpqpaxlbcmigdtrgyzbqsr2qpclqvdxzjxay58nrd2b1rx80av7ah2v1o8onlewvmxjsx3if90khw7076fkgb5unmbjy8kqqxja6v4zcdoswxj61w37d0ocem99xg4tzplehcwzcsjz6ep1ml7ahox4wvzonp0zdvu21ig2tgwtrxal729k6k5rauki04lztr6xqpqwflghxohedxyqdgx8w7lb4e0zxpo2fjt5rn7a2l2mo8f3cnajgckqn7jpc8em5em2y39a24dw5sqtmkmuogwool9zesxgwmkz3yga0qcg6n7mf0q3e6j3nnh12nvnfqmx2etfcz7illntia4vxanz11j61l5yfdi8w8uk6d64pp6dnaoq0nyh3023iupvskepovedzrq67kp61tghyekwde5crat4x06xj6u6nic6ea1aunhdtf7szguksqrgyse9emz7xv4enk6ywousd41bbhwz6842lgbqkj1rg9gq88ts32mc0j9av8ignwa1zpb8ao0k2z5ih2j3ts5fposmos6gmrllyqt0usxg0klkz2yg346kp3lgl7qsjf3nhfr0gw8mfz3lja0f3w9cgim37kgsov5srp1hiz15h57lx6pwy6lr8cysfau7ljbuo4arxosu305ga3fu07cnqaucdcu4ahc5pbzxw6dg7kgyuxagikq3r4nmo4jairbeyhbu8j1u946fsaynm36epis9ivw4ncp81pr366my32cc1lgat5ydhj6dj7cicu2pnzey5bmt6x6y05xveb2382dagw6s8fpuy8cs9s2itxe61xdmyf0em8no0h6co63no994keu8qxxx3xvhx75nsc3d03bgm1h2t1lxp3q3r5crd4vei3bihcz85qvhltata2sr0w9h4rb25brvv57vrr30idy5dxdaxfl58ra6qk43kiv84237cz4a9mtxgyreb5n43polgpog1t4ylm3xwnti675mo9j3e7yzk17atytm7q2tpt6aelcqsl83z2mt47poruxer3dqhz6i70t1ap9ppt5refeiqhxt4mpa50l2pktbqghszfd0siab4exwjp8b0kha7iswk1mdcyn0nr9ui6s5aikrcj4zswn400aid9lnf03am78bx4oif384niq75vijbzoes1ccg9d804c82971etibxwr37nm2bcdj7pd91ggug3pmujxlwe32lmr4iwb6iesye35msmgocz5j9pszdidc491qs961w1ip4hz5nwrsidva5t0zzm78mxhmte1w0grpf3fp5yuxayyuvoucx14bgwu45vh3l6mmvvnfs3c36xi8khy6392zjbq9s8lqfohj7z1nku0rd29uxrvy3624cdwpiugb1a5azr4hwck81l8ybzfe2l5tevqzms16cpw6w6knmdif9vjymr5xz27sc7xd5r062z2ugue1fehsjyfzdchvyn4fb4totg9qetfzlk3jgkw7gsdpua745qssn8fxx8ckld37m56q31jgov3shnfs26y877t39y2xd3nr2xsu1cfa51mbwng52tsvtsqx5strqkkcewc3bodi6r5a6zsukyd8vq3zxqqypwioh9qe0mdpq7kqkqwj1nffsi0rpkqe2nebqtssep62vfbj8cntz5lej8yux9kl2hn8pq5uh0e1dg6yx4vh629icrbzn3w3wmacw1sprlji9iacgqxga4ywsy4c2ibnay94t1r48g0rvgiqrthau0zdd0t98wcilfkpmg25tnrebluq5kikkjlc115l117sude7azz4q7zgd1z8h545enyim5hxa04dzt1sj3f2ibhgx2rteemn2h69nf6a4ko07011rxn8j4s0no1jzd76l9quogkmdciv7vlqsej9bluux8nnq4srthasbwszko6ryp8oi9hr3kz53ufbkvkdyi2pv9nmionls8dejt1i3soskfc1ja961ly7ghuagax6hdzdm89j5ujm2b50jpwtd7kh2ufkkr3h9zjb8v9azlayhosn63jo7ilejojyoxt8yth1thav7wklhwq5alzaegman16enu06nis0xqvmtqa6y75vjqivyma80ddi1xquuwqiy1brwrgz12gp52g528biv46vok82vtjc1jc0bux71lh0mrcdvzu16z06kgwonowinf7b4pgnhrbhj06jtbdvxn4hfu4rbgubpvyk3yjtpnfiiy4hxgqr0zcz9l1pa56coo6lqjqnq8ztf3x8c093glo3k39fyl3td5arqr0kyqp070ufvrrpeipld2qew26007qvxnrqhydpmvf4mpi2vofs3lwhxbz5aawrng21fk4i1cvsdouzzb6o568a2our6y98lwdbp1l01ck2m9fzy5rqmsv8j70atv1eqfs6cl7aekqlg7u0rvc1jvkle6ymwce49yy7vd5l741dxpafwquljiwc2p17p5dfuylhid9dbm4fdpnyvl9cvww4a1b80jr8lhvxslvzgyntsveh0f2d3q5qd5i765xjy8fxzm0hbrpzhg0mirhzr5dr4noa1ze80m4tit13ylqoccyywr7ia8lgenqebj1bb30aa0s3cy9xr0hxqpf0iys6wyt7yk9pju56n482ickae693zunh9iwho5z1kp9m2i6lq5gpd475792ra74kwh2ogy5wly99l11obhkqy5ji4329daouryurx4o0hzo2hmgiju3jd6nyeddpinydcsjvn4gwrghatq9pv2560iopfhuydsdl5im1swtoe4duerh64lh2ycd4j16p5lmvszg6x10c592t7q6of8o2qy14m5oydhzdune6ii27xoery90o9ih4cn4vwtpz2o5k977ht5e46t4hz1ee45kynp7wu40rhhqr14qivdzwmockwy7dcj1r221s0mbbk5tvhehq2b8njbwnrvyh8xbdx031ep7x7f0lfucf9124yx5l4d9wdviimvjtp0axk6h91229modykiaqnatfrmuw39uyc3apzn6kcva95u2ilr2jy0i0c6w8idauwvhs7htrw170yle0jlhc2fkav917vfimfj4sf6q5fzywsr35xabh1n6g4fozw9xg322w926czdlnohrbtxb65qqq29nxmvw9319ovatsct8wq8gpqswc5
o444x1bz67m0sq2cqhdcmt3hdkad8ec136ui6xbexdh29rv41py9inpo192ubyctytlopg88bxap7dh3iw00a9ixvhkufgbhe2pyr95merpko7povmls4ngzdir44x6meovfle6sldafswc9itiszhtbuwf0ilswgv2m8t7tnoz81r4ce9kegdou9pdjyznjpiey0rmjn3wxxzjz507l1lla9xlghx7rysqxaxwsp6olcx0udclddclt3wxk56ua0i35avq320e4xhpi767ikuhjev15gyzskoa58i7trgp6j5mjw58goc3yte2idf70ead6cgadb7ov8yu65m33ongz8h40lhxzq4mfulqhsjxrasyjo5j43fhlnofr5ss9uw7uzkppnq7devi72dzwjikb54g92c57tjc1iv1w5ok1t138k4mkme724hxw01e8nj8pp06iwlzmgyw0uh1myreijf02k3k68r1otwnx0nfegd0tppc13ig9uhm8df4jzlfoyoh4xy9t7xr10zs6usjp3u203nsyej 00:30:59.829 13:51:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:30:59.829 13:51:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:30:59.829 13:51:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:30:59.829 13:51:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:30:59.829 [2024-11-20 13:51:56.967918] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:30:59.829 [2024-11-20 13:51:56.968068] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60206 ] 00:30:59.829 { 00:30:59.829 "subsystems": [ 00:30:59.829 { 00:30:59.829 "subsystem": "bdev", 00:30:59.829 "config": [ 00:30:59.829 { 00:30:59.829 "params": { 00:30:59.829 "trtype": "pcie", 00:30:59.829 "traddr": "0000:00:10.0", 00:30:59.829 "name": "Nvme0" 00:30:59.829 }, 00:30:59.829 "method": "bdev_nvme_attach_controller" 00:30:59.829 }, 00:30:59.829 { 00:30:59.829 "method": "bdev_wait_for_examine" 00:30:59.829 } 00:30:59.829 ] 00:30:59.829 } 00:30:59.829 ] 00:30:59.829 } 00:30:59.829 [2024-11-20 13:51:57.116397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:00.089 [2024-11-20 13:51:57.173958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:00.089 [2024-11-20 13:51:57.217026] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:00.090  [2024-11-20T13:51:57.673Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:31:00.350 00:31:00.350 13:51:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:31:00.350 13:51:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:31:00.350 13:51:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:31:00.350 13:51:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:31:00.350 [2024-11-20 13:51:57.546572] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
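The dd_rw_offset round being set up above writes a freshly generated 4 KiB pattern at block offset 1 (--seek=1) and reads the same block back with --skip=1 --count=1 before comparing it against the original data. A sketch of that round trip, reusing the assumed $DD path and one-line config from the earlier sketches, with cmp standing in for the read -rn4096 comparison the real test performs:

  SPDK=${SPDK:-/home/vagrant/spdk_repo/spdk}
  DD="$SPDK/build/bin/spdk_dd"
  conf='{"subsystems":[{"subsystem":"bdev","config":[{"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},"method":"bdev_nvme_attach_controller"},{"method":"bdev_wait_for_examine"}]}]}'

  # Stand-in for gen_bytes 4096: 4 KiB of random alphanumeric data.
  LC_ALL=C tr -dc 'a-z0-9' < /dev/urandom | head -c 4096 > dd.dump0

  "$DD" --if=dd.dump0 --ob=Nvme0n1 --seek=1           --json <(echo "$conf")   # write at block offset 1
  "$DD" --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json <(echo "$conf")   # read that block back

  cmp -s dd.dump0 dd.dump1 && echo "offset round trip OK"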
00:31:00.350 [2024-11-20 13:51:57.546766] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60214 ] 00:31:00.350 { 00:31:00.350 "subsystems": [ 00:31:00.350 { 00:31:00.350 "subsystem": "bdev", 00:31:00.350 "config": [ 00:31:00.350 { 00:31:00.350 "params": { 00:31:00.350 "trtype": "pcie", 00:31:00.350 "traddr": "0000:00:10.0", 00:31:00.350 "name": "Nvme0" 00:31:00.350 }, 00:31:00.350 "method": "bdev_nvme_attach_controller" 00:31:00.350 }, 00:31:00.350 { 00:31:00.350 "method": "bdev_wait_for_examine" 00:31:00.350 } 00:31:00.350 ] 00:31:00.350 } 00:31:00.350 ] 00:31:00.350 } 00:31:00.609 [2024-11-20 13:51:57.695922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:00.609 [2024-11-20 13:51:57.753078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:00.609 [2024-11-20 13:51:57.811686] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:00.609  [2024-11-20T13:51:58.193Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:31:00.870 00:31:00.870 13:51:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:31:00.871 13:51:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ ppxfym1df8oqd542ozdmleaijw91hadzp5zzq9y2055dj0a61fgoxrpir4jumfucg886dy0d4z6kw4mh25ovlj25kt0ucht2ajiur140f7v67cghihko814csdpezxywgcogmfl1ugd4lhxj6baxh4bfqfo855tbfuik78aloxvii3k7dvboze69f23pt58u1mceh4qszl9iv3kc1ufo2ch8v64u03ca34h226r81h5rlp8gyg2eoi6acvupi5ddn7llnj0vdp15megnyxz40owbu70ismu4kms1lh480b6qeuv9qvpnkxv5oqub0s8g41vbtza4rytjpnl48nubwbf7vrgj7kat8cg8io1e2zkpqpaxlbcmigdtrgyzbqsr2qpclqvdxzjxay58nrd2b1rx80av7ah2v1o8onlewvmxjsx3if90khw7076fkgb5unmbjy8kqqxja6v4zcdoswxj61w37d0ocem99xg4tzplehcwzcsjz6ep1ml7ahox4wvzonp0zdvu21ig2tgwtrxal729k6k5rauki04lztr6xqpqwflghxohedxyqdgx8w7lb4e0zxpo2fjt5rn7a2l2mo8f3cnajgckqn7jpc8em5em2y39a24dw5sqtmkmuogwool9zesxgwmkz3yga0qcg6n7mf0q3e6j3nnh12nvnfqmx2etfcz7illntia4vxanz11j61l5yfdi8w8uk6d64pp6dnaoq0nyh3023iupvskepovedzrq67kp61tghyekwde5crat4x06xj6u6nic6ea1aunhdtf7szguksqrgyse9emz7xv4enk6ywousd41bbhwz6842lgbqkj1rg9gq88ts32mc0j9av8ignwa1zpb8ao0k2z5ih2j3ts5fposmos6gmrllyqt0usxg0klkz2yg346kp3lgl7qsjf3nhfr0gw8mfz3lja0f3w9cgim37kgsov5srp1hiz15h57lx6pwy6lr8cysfau7ljbuo4arxosu305ga3fu07cnqaucdcu4ahc5pbzxw6dg7kgyuxagikq3r4nmo4jairbeyhbu8j1u946fsaynm36epis9ivw4ncp81pr366my32cc1lgat5ydhj6dj7cicu2pnzey5bmt6x6y05xveb2382dagw6s8fpuy8cs9s2itxe61xdmyf0em8no0h6co63no994keu8qxxx3xvhx75nsc3d03bgm1h2t1lxp3q3r5crd4vei3bihcz85qvhltata2sr0w9h4rb25brvv57vrr30idy5dxdaxfl58ra6qk43kiv84237cz4a9mtxgyreb5n43polgpog1t4ylm3xwnti675mo9j3e7yzk17atytm7q2tpt6aelcqsl83z2mt47poruxer3dqhz6i70t1ap9ppt5refeiqhxt4mpa50l2pktbqghszfd0siab4exwjp8b0kha7iswk1mdcyn0nr9ui6s5aikrcj4zswn400aid9lnf03am78bx4oif384niq75vijbzoes1ccg9d804c82971etibxwr37nm2bcdj7pd91ggug3pmujxlwe32lmr4iwb6iesye35msmgocz5j9pszdidc491qs961w1ip4hz5nwrsidva5t0zzm78mxhmte1w0grpf3fp5yuxayyuvoucx14bgwu45vh3l6mmvvnfs3c36xi8khy6392zjbq9s8lqfohj7z1nku0rd29uxrvy3624cdwpiugb1a5azr4hwck81l8ybzfe2l5tevqzms16cpw6w6knmdif9vjymr5xz27sc7xd5r062z2ugue1fehsjyfzdchvyn4fb4totg9qetfzlk3jgkw7gsdpua745qssn8fxx8ckld37m56q31jgov3shnfs26y877t39y2xd3nr2xsu1cfa51mbwng52tsvtsqx5strqkkcewc3bodi6r5a6zsukyd8vq3zxqqypwioh9qe0mdpq7kqkqwj1nffsi0rpkqe2nebqtssep62vfbj8cntz5lej8yux9kl2hn8pq5uh0e1dg6yx4vh629icrbzn3w3wmacw1sprlji9iacgqxga4ywsy4c2ibnay94t1r48g0rvgiqrthau0zdd0t9
8wcilfkpmg25tnrebluq5kikkjlc115l117sude7azz4q7zgd1z8h545enyim5hxa04dzt1sj3f2ibhgx2rteemn2h69nf6a4ko07011rxn8j4s0no1jzd76l9quogkmdciv7vlqsej9bluux8nnq4srthasbwszko6ryp8oi9hr3kz53ufbkvkdyi2pv9nmionls8dejt1i3soskfc1ja961ly7ghuagax6hdzdm89j5ujm2b50jpwtd7kh2ufkkr3h9zjb8v9azlayhosn63jo7ilejojyoxt8yth1thav7wklhwq5alzaegman16enu06nis0xqvmtqa6y75vjqivyma80ddi1xquuwqiy1brwrgz12gp52g528biv46vok82vtjc1jc0bux71lh0mrcdvzu16z06kgwonowinf7b4pgnhrbhj06jtbdvxn4hfu4rbgubpvyk3yjtpnfiiy4hxgqr0zcz9l1pa56coo6lqjqnq8ztf3x8c093glo3k39fyl3td5arqr0kyqp070ufvrrpeipld2qew26007qvxnrqhydpmvf4mpi2vofs3lwhxbz5aawrng21fk4i1cvsdouzzb6o568a2our6y98lwdbp1l01ck2m9fzy5rqmsv8j70atv1eqfs6cl7aekqlg7u0rvc1jvkle6ymwce49yy7vd5l741dxpafwquljiwc2p17p5dfuylhid9dbm4fdpnyvl9cvww4a1b80jr8lhvxslvzgyntsveh0f2d3q5qd5i765xjy8fxzm0hbrpzhg0mirhzr5dr4noa1ze80m4tit13ylqoccyywr7ia8lgenqebj1bb30aa0s3cy9xr0hxqpf0iys6wyt7yk9pju56n482ickae693zunh9iwho5z1kp9m2i6lq5gpd475792ra74kwh2ogy5wly99l11obhkqy5ji4329daouryurx4o0hzo2hmgiju3jd6nyeddpinydcsjvn4gwrghatq9pv2560iopfhuydsdl5im1swtoe4duerh64lh2ycd4j16p5lmvszg6x10c592t7q6of8o2qy14m5oydhzdune6ii27xoery90o9ih4cn4vwtpz2o5k977ht5e46t4hz1ee45kynp7wu40rhhqr14qivdzwmockwy7dcj1r221s0mbbk5tvhehq2b8njbwnrvyh8xbdx031ep7x7f0lfucf9124yx5l4d9wdviimvjtp0axk6h91229modykiaqnatfrmuw39uyc3apzn6kcva95u2ilr2jy0i0c6w8idauwvhs7htrw170yle0jlhc2fkav917vfimfj4sf6q5fzywsr35xabh1n6g4fozw9xg322w926czdlnohrbtxb65qqq29nxmvw9319ovatsct8wq8gpqswc5o444x1bz67m0sq2cqhdcmt3hdkad8ec136ui6xbexdh29rv41py9inpo192ubyctytlopg88bxap7dh3iw00a9ixvhkufgbhe2pyr95merpko7povmls4ngzdir44x6meovfle6sldafswc9itiszhtbuwf0ilswgv2m8t7tnoz81r4ce9kegdou9pdjyznjpiey0rmjn3wxxzjz507l1lla9xlghx7rysqxaxwsp6olcx0udclddclt3wxk56ua0i35avq320e4xhpi767ikuhjev15gyzskoa58i7trgp6j5mjw58goc3yte2idf70ead6cgadb7ov8yu65m33ongz8h40lhxzq4mfulqhsjxrasyjo5j43fhlnofr5ss9uw7uzkppnq7devi72dzwjikb54g92c57tjc1iv1w5ok1t138k4mkme724hxw01e8nj8pp06iwlzmgyw0uh1myreijf02k3k68r1otwnx0nfegd0tppc13ig9uhm8df4jzlfoyoh4xy9t7xr10zs6usjp3u203nsyej == 
\p\p\x\f\y\m\1\d\f\8\o\q\d\5\4\2\o\z\d\m\l\e\a\i\j\w\9\1\h\a\d\z\p\5\z\z\q\9\y\2\0\5\5\d\j\0\a\6\1\f\g\o\x\r\p\i\r\4\j\u\m\f\u\c\g\8\8\6\d\y\0\d\4\z\6\k\w\4\m\h\2\5\o\v\l\j\2\5\k\t\0\u\c\h\t\2\a\j\i\u\r\1\4\0\f\7\v\6\7\c\g\h\i\h\k\o\8\1\4\c\s\d\p\e\z\x\y\w\g\c\o\g\m\f\l\1\u\g\d\4\l\h\x\j\6\b\a\x\h\4\b\f\q\f\o\8\5\5\t\b\f\u\i\k\7\8\a\l\o\x\v\i\i\3\k\7\d\v\b\o\z\e\6\9\f\2\3\p\t\5\8\u\1\m\c\e\h\4\q\s\z\l\9\i\v\3\k\c\1\u\f\o\2\c\h\8\v\6\4\u\0\3\c\a\3\4\h\2\2\6\r\8\1\h\5\r\l\p\8\g\y\g\2\e\o\i\6\a\c\v\u\p\i\5\d\d\n\7\l\l\n\j\0\v\d\p\1\5\m\e\g\n\y\x\z\4\0\o\w\b\u\7\0\i\s\m\u\4\k\m\s\1\l\h\4\8\0\b\6\q\e\u\v\9\q\v\p\n\k\x\v\5\o\q\u\b\0\s\8\g\4\1\v\b\t\z\a\4\r\y\t\j\p\n\l\4\8\n\u\b\w\b\f\7\v\r\g\j\7\k\a\t\8\c\g\8\i\o\1\e\2\z\k\p\q\p\a\x\l\b\c\m\i\g\d\t\r\g\y\z\b\q\s\r\2\q\p\c\l\q\v\d\x\z\j\x\a\y\5\8\n\r\d\2\b\1\r\x\8\0\a\v\7\a\h\2\v\1\o\8\o\n\l\e\w\v\m\x\j\s\x\3\i\f\9\0\k\h\w\7\0\7\6\f\k\g\b\5\u\n\m\b\j\y\8\k\q\q\x\j\a\6\v\4\z\c\d\o\s\w\x\j\6\1\w\3\7\d\0\o\c\e\m\9\9\x\g\4\t\z\p\l\e\h\c\w\z\c\s\j\z\6\e\p\1\m\l\7\a\h\o\x\4\w\v\z\o\n\p\0\z\d\v\u\2\1\i\g\2\t\g\w\t\r\x\a\l\7\2\9\k\6\k\5\r\a\u\k\i\0\4\l\z\t\r\6\x\q\p\q\w\f\l\g\h\x\o\h\e\d\x\y\q\d\g\x\8\w\7\l\b\4\e\0\z\x\p\o\2\f\j\t\5\r\n\7\a\2\l\2\m\o\8\f\3\c\n\a\j\g\c\k\q\n\7\j\p\c\8\e\m\5\e\m\2\y\3\9\a\2\4\d\w\5\s\q\t\m\k\m\u\o\g\w\o\o\l\9\z\e\s\x\g\w\m\k\z\3\y\g\a\0\q\c\g\6\n\7\m\f\0\q\3\e\6\j\3\n\n\h\1\2\n\v\n\f\q\m\x\2\e\t\f\c\z\7\i\l\l\n\t\i\a\4\v\x\a\n\z\1\1\j\6\1\l\5\y\f\d\i\8\w\8\u\k\6\d\6\4\p\p\6\d\n\a\o\q\0\n\y\h\3\0\2\3\i\u\p\v\s\k\e\p\o\v\e\d\z\r\q\6\7\k\p\6\1\t\g\h\y\e\k\w\d\e\5\c\r\a\t\4\x\0\6\x\j\6\u\6\n\i\c\6\e\a\1\a\u\n\h\d\t\f\7\s\z\g\u\k\s\q\r\g\y\s\e\9\e\m\z\7\x\v\4\e\n\k\6\y\w\o\u\s\d\4\1\b\b\h\w\z\6\8\4\2\l\g\b\q\k\j\1\r\g\9\g\q\8\8\t\s\3\2\m\c\0\j\9\a\v\8\i\g\n\w\a\1\z\p\b\8\a\o\0\k\2\z\5\i\h\2\j\3\t\s\5\f\p\o\s\m\o\s\6\g\m\r\l\l\y\q\t\0\u\s\x\g\0\k\l\k\z\2\y\g\3\4\6\k\p\3\l\g\l\7\q\s\j\f\3\n\h\f\r\0\g\w\8\m\f\z\3\l\j\a\0\f\3\w\9\c\g\i\m\3\7\k\g\s\o\v\5\s\r\p\1\h\i\z\1\5\h\5\7\l\x\6\p\w\y\6\l\r\8\c\y\s\f\a\u\7\l\j\b\u\o\4\a\r\x\o\s\u\3\0\5\g\a\3\f\u\0\7\c\n\q\a\u\c\d\c\u\4\a\h\c\5\p\b\z\x\w\6\d\g\7\k\g\y\u\x\a\g\i\k\q\3\r\4\n\m\o\4\j\a\i\r\b\e\y\h\b\u\8\j\1\u\9\4\6\f\s\a\y\n\m\3\6\e\p\i\s\9\i\v\w\4\n\c\p\8\1\p\r\3\6\6\m\y\3\2\c\c\1\l\g\a\t\5\y\d\h\j\6\d\j\7\c\i\c\u\2\p\n\z\e\y\5\b\m\t\6\x\6\y\0\5\x\v\e\b\2\3\8\2\d\a\g\w\6\s\8\f\p\u\y\8\c\s\9\s\2\i\t\x\e\6\1\x\d\m\y\f\0\e\m\8\n\o\0\h\6\c\o\6\3\n\o\9\9\4\k\e\u\8\q\x\x\x\3\x\v\h\x\7\5\n\s\c\3\d\0\3\b\g\m\1\h\2\t\1\l\x\p\3\q\3\r\5\c\r\d\4\v\e\i\3\b\i\h\c\z\8\5\q\v\h\l\t\a\t\a\2\s\r\0\w\9\h\4\r\b\2\5\b\r\v\v\5\7\v\r\r\3\0\i\d\y\5\d\x\d\a\x\f\l\5\8\r\a\6\q\k\4\3\k\i\v\8\4\2\3\7\c\z\4\a\9\m\t\x\g\y\r\e\b\5\n\4\3\p\o\l\g\p\o\g\1\t\4\y\l\m\3\x\w\n\t\i\6\7\5\m\o\9\j\3\e\7\y\z\k\1\7\a\t\y\t\m\7\q\2\t\p\t\6\a\e\l\c\q\s\l\8\3\z\2\m\t\4\7\p\o\r\u\x\e\r\3\d\q\h\z\6\i\7\0\t\1\a\p\9\p\p\t\5\r\e\f\e\i\q\h\x\t\4\m\p\a\5\0\l\2\p\k\t\b\q\g\h\s\z\f\d\0\s\i\a\b\4\e\x\w\j\p\8\b\0\k\h\a\7\i\s\w\k\1\m\d\c\y\n\0\n\r\9\u\i\6\s\5\a\i\k\r\c\j\4\z\s\w\n\4\0\0\a\i\d\9\l\n\f\0\3\a\m\7\8\b\x\4\o\i\f\3\8\4\n\i\q\7\5\v\i\j\b\z\o\e\s\1\c\c\g\9\d\8\0\4\c\8\2\9\7\1\e\t\i\b\x\w\r\3\7\n\m\2\b\c\d\j\7\p\d\9\1\g\g\u\g\3\p\m\u\j\x\l\w\e\3\2\l\m\r\4\i\w\b\6\i\e\s\y\e\3\5\m\s\m\g\o\c\z\5\j\9\p\s\z\d\i\d\c\4\9\1\q\s\9\6\1\w\1\i\p\4\h\z\5\n\w\r\s\i\d\v\a\5\t\0\z\z\m\7\8\m\x\h\m\t\e\1\w\0\g\r\p\f\3\f\p\5\y\u\x\a\y\y\u\v\o\u\c\x\1\4\b\g\w\u\4\5\v\h\3\l\6\m\m\v\v\n\f\s\3\c\3\6\x\i\8\k\h\y\6\3\9\2\z\j\b\q\9\s\8\l\q\f\o\h\j\7\z\1\n\k\u\0\r\d\2\9\u\x\r\v\y\3\6\2\4\c\d\w\p\i\u\g\b\1\a\5\a\z\r\4\h\w\c\k\8\1\l\8\y\b\z\f\e\2\l\5\t\e\v\q\z\
m\s\1\6\c\p\w\6\w\6\k\n\m\d\i\f\9\v\j\y\m\r\5\x\z\2\7\s\c\7\x\d\5\r\0\6\2\z\2\u\g\u\e\1\f\e\h\s\j\y\f\z\d\c\h\v\y\n\4\f\b\4\t\o\t\g\9\q\e\t\f\z\l\k\3\j\g\k\w\7\g\s\d\p\u\a\7\4\5\q\s\s\n\8\f\x\x\8\c\k\l\d\3\7\m\5\6\q\3\1\j\g\o\v\3\s\h\n\f\s\2\6\y\8\7\7\t\3\9\y\2\x\d\3\n\r\2\x\s\u\1\c\f\a\5\1\m\b\w\n\g\5\2\t\s\v\t\s\q\x\5\s\t\r\q\k\k\c\e\w\c\3\b\o\d\i\6\r\5\a\6\z\s\u\k\y\d\8\v\q\3\z\x\q\q\y\p\w\i\o\h\9\q\e\0\m\d\p\q\7\k\q\k\q\w\j\1\n\f\f\s\i\0\r\p\k\q\e\2\n\e\b\q\t\s\s\e\p\6\2\v\f\b\j\8\c\n\t\z\5\l\e\j\8\y\u\x\9\k\l\2\h\n\8\p\q\5\u\h\0\e\1\d\g\6\y\x\4\v\h\6\2\9\i\c\r\b\z\n\3\w\3\w\m\a\c\w\1\s\p\r\l\j\i\9\i\a\c\g\q\x\g\a\4\y\w\s\y\4\c\2\i\b\n\a\y\9\4\t\1\r\4\8\g\0\r\v\g\i\q\r\t\h\a\u\0\z\d\d\0\t\9\8\w\c\i\l\f\k\p\m\g\2\5\t\n\r\e\b\l\u\q\5\k\i\k\k\j\l\c\1\1\5\l\1\1\7\s\u\d\e\7\a\z\z\4\q\7\z\g\d\1\z\8\h\5\4\5\e\n\y\i\m\5\h\x\a\0\4\d\z\t\1\s\j\3\f\2\i\b\h\g\x\2\r\t\e\e\m\n\2\h\6\9\n\f\6\a\4\k\o\0\7\0\1\1\r\x\n\8\j\4\s\0\n\o\1\j\z\d\7\6\l\9\q\u\o\g\k\m\d\c\i\v\7\v\l\q\s\e\j\9\b\l\u\u\x\8\n\n\q\4\s\r\t\h\a\s\b\w\s\z\k\o\6\r\y\p\8\o\i\9\h\r\3\k\z\5\3\u\f\b\k\v\k\d\y\i\2\p\v\9\n\m\i\o\n\l\s\8\d\e\j\t\1\i\3\s\o\s\k\f\c\1\j\a\9\6\1\l\y\7\g\h\u\a\g\a\x\6\h\d\z\d\m\8\9\j\5\u\j\m\2\b\5\0\j\p\w\t\d\7\k\h\2\u\f\k\k\r\3\h\9\z\j\b\8\v\9\a\z\l\a\y\h\o\s\n\6\3\j\o\7\i\l\e\j\o\j\y\o\x\t\8\y\t\h\1\t\h\a\v\7\w\k\l\h\w\q\5\a\l\z\a\e\g\m\a\n\1\6\e\n\u\0\6\n\i\s\0\x\q\v\m\t\q\a\6\y\7\5\v\j\q\i\v\y\m\a\8\0\d\d\i\1\x\q\u\u\w\q\i\y\1\b\r\w\r\g\z\1\2\g\p\5\2\g\5\2\8\b\i\v\4\6\v\o\k\8\2\v\t\j\c\1\j\c\0\b\u\x\7\1\l\h\0\m\r\c\d\v\z\u\1\6\z\0\6\k\g\w\o\n\o\w\i\n\f\7\b\4\p\g\n\h\r\b\h\j\0\6\j\t\b\d\v\x\n\4\h\f\u\4\r\b\g\u\b\p\v\y\k\3\y\j\t\p\n\f\i\i\y\4\h\x\g\q\r\0\z\c\z\9\l\1\p\a\5\6\c\o\o\6\l\q\j\q\n\q\8\z\t\f\3\x\8\c\0\9\3\g\l\o\3\k\3\9\f\y\l\3\t\d\5\a\r\q\r\0\k\y\q\p\0\7\0\u\f\v\r\r\p\e\i\p\l\d\2\q\e\w\2\6\0\0\7\q\v\x\n\r\q\h\y\d\p\m\v\f\4\m\p\i\2\v\o\f\s\3\l\w\h\x\b\z\5\a\a\w\r\n\g\2\1\f\k\4\i\1\c\v\s\d\o\u\z\z\b\6\o\5\6\8\a\2\o\u\r\6\y\9\8\l\w\d\b\p\1\l\0\1\c\k\2\m\9\f\z\y\5\r\q\m\s\v\8\j\7\0\a\t\v\1\e\q\f\s\6\c\l\7\a\e\k\q\l\g\7\u\0\r\v\c\1\j\v\k\l\e\6\y\m\w\c\e\4\9\y\y\7\v\d\5\l\7\4\1\d\x\p\a\f\w\q\u\l\j\i\w\c\2\p\1\7\p\5\d\f\u\y\l\h\i\d\9\d\b\m\4\f\d\p\n\y\v\l\9\c\v\w\w\4\a\1\b\8\0\j\r\8\l\h\v\x\s\l\v\z\g\y\n\t\s\v\e\h\0\f\2\d\3\q\5\q\d\5\i\7\6\5\x\j\y\8\f\x\z\m\0\h\b\r\p\z\h\g\0\m\i\r\h\z\r\5\d\r\4\n\o\a\1\z\e\8\0\m\4\t\i\t\1\3\y\l\q\o\c\c\y\y\w\r\7\i\a\8\l\g\e\n\q\e\b\j\1\b\b\3\0\a\a\0\s\3\c\y\9\x\r\0\h\x\q\p\f\0\i\y\s\6\w\y\t\7\y\k\9\p\j\u\5\6\n\4\8\2\i\c\k\a\e\6\9\3\z\u\n\h\9\i\w\h\o\5\z\1\k\p\9\m\2\i\6\l\q\5\g\p\d\4\7\5\7\9\2\r\a\7\4\k\w\h\2\o\g\y\5\w\l\y\9\9\l\1\1\o\b\h\k\q\y\5\j\i\4\3\2\9\d\a\o\u\r\y\u\r\x\4\o\0\h\z\o\2\h\m\g\i\j\u\3\j\d\6\n\y\e\d\d\p\i\n\y\d\c\s\j\v\n\4\g\w\r\g\h\a\t\q\9\p\v\2\5\6\0\i\o\p\f\h\u\y\d\s\d\l\5\i\m\1\s\w\t\o\e\4\d\u\e\r\h\6\4\l\h\2\y\c\d\4\j\1\6\p\5\l\m\v\s\z\g\6\x\1\0\c\5\9\2\t\7\q\6\o\f\8\o\2\q\y\1\4\m\5\o\y\d\h\z\d\u\n\e\6\i\i\2\7\x\o\e\r\y\9\0\o\9\i\h\4\c\n\4\v\w\t\p\z\2\o\5\k\9\7\7\h\t\5\e\4\6\t\4\h\z\1\e\e\4\5\k\y\n\p\7\w\u\4\0\r\h\h\q\r\1\4\q\i\v\d\z\w\m\o\c\k\w\y\7\d\c\j\1\r\2\2\1\s\0\m\b\b\k\5\t\v\h\e\h\q\2\b\8\n\j\b\w\n\r\v\y\h\8\x\b\d\x\0\3\1\e\p\7\x\7\f\0\l\f\u\c\f\9\1\2\4\y\x\5\l\4\d\9\w\d\v\i\i\m\v\j\t\p\0\a\x\k\6\h\9\1\2\2\9\m\o\d\y\k\i\a\q\n\a\t\f\r\m\u\w\3\9\u\y\c\3\a\p\z\n\6\k\c\v\a\9\5\u\2\i\l\r\2\j\y\0\i\0\c\6\w\8\i\d\a\u\w\v\h\s\7\h\t\r\w\1\7\0\y\l\e\0\j\l\h\c\2\f\k\a\v\9\1\7\v\f\i\m\f\j\4\s\f\6\q\5\f\z\y\w\s\r\3\5\x\a\b\h\1\n\6\g\4\f\o\z\w\9\x\g\3\2\2\w\9\2\6\c\z\d\l\n\o\h\r\b\t\x\b\6\5\q\q\q\2\9\n\x\m\v\w\9\3\1\9\o\v\a\t\s\c\t\8\w\q\8\g\p\q\s\w\c\5\o\4\4\4\x
\1\b\z\6\7\m\0\s\q\2\c\q\h\d\c\m\t\3\h\d\k\a\d\8\e\c\1\3\6\u\i\6\x\b\e\x\d\h\2\9\r\v\4\1\p\y\9\i\n\p\o\1\9\2\u\b\y\c\t\y\t\l\o\p\g\8\8\b\x\a\p\7\d\h\3\i\w\0\0\a\9\i\x\v\h\k\u\f\g\b\h\e\2\p\y\r\9\5\m\e\r\p\k\o\7\p\o\v\m\l\s\4\n\g\z\d\i\r\4\4\x\6\m\e\o\v\f\l\e\6\s\l\d\a\f\s\w\c\9\i\t\i\s\z\h\t\b\u\w\f\0\i\l\s\w\g\v\2\m\8\t\7\t\n\o\z\8\1\r\4\c\e\9\k\e\g\d\o\u\9\p\d\j\y\z\n\j\p\i\e\y\0\r\m\j\n\3\w\x\x\z\j\z\5\0\7\l\1\l\l\a\9\x\l\g\h\x\7\r\y\s\q\x\a\x\w\s\p\6\o\l\c\x\0\u\d\c\l\d\d\c\l\t\3\w\x\k\5\6\u\a\0\i\3\5\a\v\q\3\2\0\e\4\x\h\p\i\7\6\7\i\k\u\h\j\e\v\1\5\g\y\z\s\k\o\a\5\8\i\7\t\r\g\p\6\j\5\m\j\w\5\8\g\o\c\3\y\t\e\2\i\d\f\7\0\e\a\d\6\c\g\a\d\b\7\o\v\8\y\u\6\5\m\3\3\o\n\g\z\8\h\4\0\l\h\x\z\q\4\m\f\u\l\q\h\s\j\x\r\a\s\y\j\o\5\j\4\3\f\h\l\n\o\f\r\5\s\s\9\u\w\7\u\z\k\p\p\n\q\7\d\e\v\i\7\2\d\z\w\j\i\k\b\5\4\g\9\2\c\5\7\t\j\c\1\i\v\1\w\5\o\k\1\t\1\3\8\k\4\m\k\m\e\7\2\4\h\x\w\0\1\e\8\n\j\8\p\p\0\6\i\w\l\z\m\g\y\w\0\u\h\1\m\y\r\e\i\j\f\0\2\k\3\k\6\8\r\1\o\t\w\n\x\0\n\f\e\g\d\0\t\p\p\c\1\3\i\g\9\u\h\m\8\d\f\4\j\z\l\f\o\y\o\h\4\x\y\9\t\7\x\r\1\0\z\s\6\u\s\j\p\3\u\2\0\3\n\s\y\e\j ]] 00:31:00.871 00:31:00.871 real 0m1.222s 00:31:00.871 user 0m0.845s 00:31:00.871 sys 0m0.520s 00:31:00.871 13:51:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:00.871 13:51:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:31:00.871 ************************************ 00:31:00.871 END TEST dd_rw_offset 00:31:00.871 ************************************ 00:31:00.871 13:51:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:31:00.871 13:51:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:31:00.871 13:51:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:31:00.871 13:51:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:31:00.871 13:51:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:31:00.871 13:51:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:31:00.871 13:51:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:31:00.871 13:51:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:31:00.871 13:51:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:31:00.871 13:51:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:31:00.871 13:51:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:31:01.130 [2024-11-20 13:51:58.204048] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
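The dd_rw_offset block above reduces to a round-trip check: generate a 4096-byte random payload, push it through spdk_dd, read 4096 bytes back, and compare. A minimal bash sketch of that shape follows; gen_bytes is the suite's random-data helper, and any flag not visible in this log (--ib in particular; the offset arguments are elided entirely) is an assumption, not a copy of the real basic_rw.sh.

# Sketch only, not the verbatim test body.
conf=config.json                                    # the bdev config reproduced further down
expected=$(gen_bytes 4096)                          # random payload from the test helper
printf %s "$expected" > dd.dump0
spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs=4096 --count=1 --json "$conf"   # write the payload
spdk_dd --ib=Nvme0n1 --of=dd.dump1 --bs=4096 --count=1 --json "$conf"   # read it back (--ib assumed)
read -rn4096 data_check < dd.dump1
[[ $data_check == "$expected" ]]                    # the long backslash-escaped pattern above is this comparison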
00:31:01.130 [2024-11-20 13:51:58.204118] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60249 ] 00:31:01.130 { 00:31:01.130 "subsystems": [ 00:31:01.130 { 00:31:01.130 "subsystem": "bdev", 00:31:01.130 "config": [ 00:31:01.130 { 00:31:01.130 "params": { 00:31:01.130 "trtype": "pcie", 00:31:01.130 "traddr": "0000:00:10.0", 00:31:01.130 "name": "Nvme0" 00:31:01.130 }, 00:31:01.130 "method": "bdev_nvme_attach_controller" 00:31:01.130 }, 00:31:01.130 { 00:31:01.130 "method": "bdev_wait_for_examine" 00:31:01.130 } 00:31:01.130 ] 00:31:01.130 } 00:31:01.130 ] 00:31:01.130 } 00:31:01.130 [2024-11-20 13:51:58.353953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:01.130 [2024-11-20 13:51:58.406748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:01.130 [2024-11-20 13:51:58.451285] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:01.388  [2024-11-20T13:51:58.970Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:31:01.647 00:31:01.647 13:51:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:31:01.647 ************************************ 00:31:01.647 END TEST spdk_dd_basic_rw 00:31:01.647 ************************************ 00:31:01.647 00:31:01.647 real 0m16.088s 00:31:01.647 user 0m11.440s 00:31:01.647 sys 0m5.862s 00:31:01.647 13:51:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:01.647 13:51:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:31:01.647 13:51:58 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:31:01.647 13:51:58 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:01.647 13:51:58 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:01.647 13:51:58 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:31:01.647 ************************************ 00:31:01.647 START TEST spdk_dd_posix 00:31:01.647 ************************************ 00:31:01.647 13:51:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:31:01.647 * Looking for test storage... 
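Both spdk_dd invocations above take their bdev layout on /dev/fd/62; stripped of the per-line timestamps, the configuration printed by gen_conf in the log is the following JSON document:

{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "trtype": "pcie",
            "traddr": "0000:00:10.0",
            "name": "Nvme0"
          },
          "method": "bdev_nvme_attach_controller"
        },
        {
          "method": "bdev_wait_for_examine"
        }
      ]
    }
  ]
}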
00:31:01.647 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:31:01.647 13:51:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:01.647 13:51:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lcov --version 00:31:01.647 13:51:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:01.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.906 --rc genhtml_branch_coverage=1 00:31:01.906 --rc genhtml_function_coverage=1 00:31:01.906 --rc genhtml_legend=1 00:31:01.906 --rc geninfo_all_blocks=1 00:31:01.906 --rc geninfo_unexecuted_blocks=1 00:31:01.906 00:31:01.906 ' 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:01.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.906 --rc genhtml_branch_coverage=1 00:31:01.906 --rc genhtml_function_coverage=1 00:31:01.906 --rc genhtml_legend=1 00:31:01.906 --rc geninfo_all_blocks=1 00:31:01.906 --rc geninfo_unexecuted_blocks=1 00:31:01.906 00:31:01.906 ' 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:01.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.906 --rc genhtml_branch_coverage=1 00:31:01.906 --rc genhtml_function_coverage=1 00:31:01.906 --rc genhtml_legend=1 00:31:01.906 --rc geninfo_all_blocks=1 00:31:01.906 --rc geninfo_unexecuted_blocks=1 00:31:01.906 00:31:01.906 ' 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:01.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.906 --rc genhtml_branch_coverage=1 00:31:01.906 --rc genhtml_function_coverage=1 00:31:01.906 --rc genhtml_legend=1 00:31:01.906 --rc geninfo_all_blocks=1 00:31:01.906 --rc geninfo_unexecuted_blocks=1 00:31:01.906 00:31:01.906 ' 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:31:01.906 * First test run, liburing in use 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:31:01.906 13:51:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:31:01.907 ************************************ 00:31:01.907 START TEST dd_flag_append 00:31:01.907 ************************************ 00:31:01.907 13:51:59 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:31:01.907 13:51:59 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:31:01.907 13:51:59 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:31:01.907 13:51:59 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:31:01.907 13:51:59 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:31:01.907 13:51:59 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:31:01.907 13:51:59 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=sg3x9mskj29yejfs0bqgei0vw06u4d2t 00:31:01.907 13:51:59 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:31:01.907 13:51:59 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:31:01.907 13:51:59 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:31:01.907 13:51:59 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=mtzk6w659f0v23l1prbkyn8ra3ocdyfx 00:31:01.907 13:51:59 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s sg3x9mskj29yejfs0bqgei0vw06u4d2t 00:31:01.907 13:51:59 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s mtzk6w659f0v23l1prbkyn8ra3ocdyfx 00:31:01.907 13:51:59 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:31:01.907 [2024-11-20 13:51:59.095781] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
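The dd_flag_append run starting here asserts that --oflag=append adds the new bytes after the destination's existing contents instead of truncating it. A minimal sketch, with the two 32-byte random strings from the log shortened to placeholders:

# Sketch of the append check; the real test generates both strings with gen_bytes 32.
dump0='sg3x...'                 # contents written to dd.dump0 (abbreviated)
dump1='mtzk...'                 # pre-existing contents of dd.dump1 (abbreviated)
printf %s "$dump0" > dd.dump0
printf %s "$dump1" > dd.dump1
spdk_dd --if=dd.dump0 --of=dd.dump1 --oflag=append      # must not truncate dd.dump1
[[ $(<dd.dump1) == "${dump1}${dump0}" ]]                # old bytes first, appended bytes after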
00:31:01.907 [2024-11-20 13:51:59.095954] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60321 ] 00:31:02.164 [2024-11-20 13:51:59.241642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:02.165 [2024-11-20 13:51:59.296222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:02.165 [2024-11-20 13:51:59.337806] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:02.165  [2024-11-20T13:51:59.746Z] Copying: 32/32 [B] (average 31 kBps) 00:31:02.423 00:31:02.423 13:51:59 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ mtzk6w659f0v23l1prbkyn8ra3ocdyfxsg3x9mskj29yejfs0bqgei0vw06u4d2t == \m\t\z\k\6\w\6\5\9\f\0\v\2\3\l\1\p\r\b\k\y\n\8\r\a\3\o\c\d\y\f\x\s\g\3\x\9\m\s\k\j\2\9\y\e\j\f\s\0\b\q\g\e\i\0\v\w\0\6\u\4\d\2\t ]] 00:31:02.423 00:31:02.423 real 0m0.489s 00:31:02.423 user 0m0.263s 00:31:02.423 sys 0m0.227s 00:31:02.423 13:51:59 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:02.423 13:51:59 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:31:02.423 ************************************ 00:31:02.423 END TEST dd_flag_append 00:31:02.423 ************************************ 00:31:02.423 13:51:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:31:02.423 13:51:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:02.423 13:51:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:02.423 13:51:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:31:02.423 ************************************ 00:31:02.423 START TEST dd_flag_directory 00:31:02.423 ************************************ 00:31:02.423 13:51:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:31:02.423 13:51:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:31:02.423 13:51:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:31:02.423 13:51:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:31:02.423 13:51:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:02.423 13:51:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:02.423 13:51:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:02.423 13:51:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:02.423 13:51:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:02.423 13:51:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:02.423 13:51:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:02.423 13:51:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:31:02.423 13:51:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:31:02.423 [2024-11-20 13:51:59.650490] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:31:02.423 [2024-11-20 13:51:59.650563] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60344 ] 00:31:02.682 [2024-11-20 13:51:59.800257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:02.682 [2024-11-20 13:51:59.855780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:02.682 [2024-11-20 13:51:59.898551] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:02.682 [2024-11-20 13:51:59.930808] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:31:02.682 [2024-11-20 13:51:59.930881] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:31:02.682 [2024-11-20 13:51:59.930893] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:02.941 [2024-11-20 13:52:00.029433] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:31:02.941 13:52:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:31:02.941 13:52:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:02.941 13:52:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:31:02.941 13:52:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:31:02.941 13:52:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:31:02.941 13:52:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:02.941 13:52:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:31:02.941 13:52:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:31:02.941 13:52:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:31:02.941 13:52:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:02.941 13:52:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:02.941 13:52:00 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:02.941 13:52:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:02.941 13:52:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:02.941 13:52:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:02.941 13:52:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:02.941 13:52:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:31:02.941 13:52:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:31:02.941 [2024-11-20 13:52:00.150477] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:31:02.941 [2024-11-20 13:52:00.150620] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60359 ] 00:31:03.201 [2024-11-20 13:52:00.299908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:03.201 [2024-11-20 13:52:00.354205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:03.201 [2024-11-20 13:52:00.395937] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:03.201 [2024-11-20 13:52:00.427216] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:31:03.201 [2024-11-20 13:52:00.427358] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:31:03.201 [2024-11-20 13:52:00.427393] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:03.461 [2024-11-20 13:52:00.524802] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:31:03.461 13:52:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:31:03.461 13:52:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:03.461 13:52:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:31:03.461 13:52:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:31:03.461 13:52:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:31:03.461 13:52:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:03.461 00:31:03.461 real 0m0.999s 00:31:03.461 user 0m0.539s 00:31:03.461 sys 0m0.251s 00:31:03.461 13:52:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:03.461 13:52:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:31:03.461 ************************************ 00:31:03.461 END TEST dd_flag_directory 00:31:03.461 ************************************ 00:31:03.461 13:52:00 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:31:03.461 13:52:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:03.461 13:52:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:03.461 13:52:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:31:03.461 ************************************ 00:31:03.461 START TEST dd_flag_nofollow 00:31:03.461 ************************************ 00:31:03.461 13:52:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:31:03.461 13:52:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:31:03.461 13:52:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:31:03.461 13:52:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:31:03.461 13:52:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:31:03.461 13:52:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:31:03.461 13:52:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:31:03.461 13:52:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:31:03.461 13:52:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:03.461 13:52:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:03.461 13:52:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:03.461 13:52:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:03.461 13:52:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:03.461 13:52:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:03.461 13:52:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:03.461 13:52:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:31:03.461 13:52:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:31:03.461 [2024-11-20 13:52:00.719760] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
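The dd_flag_directory block above and the dd_flag_nofollow checks starting here are negative tests: spdk_dd is expected to fail, and the NOT wrapper turns that failure (the es=216/236 handling in the log) into a pass. A rough equivalent of what is asserted, assuming spdk_dd surfaces the same ENOTDIR/ELOOP errors the log records:

# Each spdk_dd call below must fail for the test to pass.
ln -fs dd.dump0 dd.dump0.link
if spdk_dd --if=dd.dump0 --iflag=directory --of=dd.dump0; then
    exit 1    # a regular file opened with the directory flag must be rejected ("Not a directory")
fi
if spdk_dd --if=dd.dump0.link --iflag=nofollow --of=dd.dump1; then
    exit 1    # opening through a symlink with nofollow must be rejected ("Too many levels of symbolic links")
fi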
00:31:03.461 [2024-11-20 13:52:00.719829] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60382 ] 00:31:03.721 [2024-11-20 13:52:00.868982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:03.721 [2024-11-20 13:52:00.924517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:03.721 [2024-11-20 13:52:00.966485] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:03.721 [2024-11-20 13:52:00.998480] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:31:03.721 [2024-11-20 13:52:00.998530] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:31:03.721 [2024-11-20 13:52:00.998543] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:03.982 [2024-11-20 13:52:01.097075] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:31:03.982 13:52:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:31:03.982 13:52:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:03.982 13:52:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:31:03.982 13:52:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:31:03.982 13:52:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:31:03.982 13:52:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:03.982 13:52:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:31:03.982 13:52:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:31:03.982 13:52:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:31:03.982 13:52:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:03.982 13:52:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:03.982 13:52:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:03.982 13:52:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:03.982 13:52:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:03.982 13:52:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:03.982 13:52:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:03.982 13:52:01 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:31:03.982 13:52:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:31:03.982 [2024-11-20 13:52:01.217344] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:31:03.982 [2024-11-20 13:52:01.217490] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60397 ] 00:31:04.241 [2024-11-20 13:52:01.366125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:04.241 [2024-11-20 13:52:01.419980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:04.241 [2024-11-20 13:52:01.463028] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:04.241 [2024-11-20 13:52:01.494722] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:31:04.241 [2024-11-20 13:52:01.494771] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:31:04.241 [2024-11-20 13:52:01.494785] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:04.501 [2024-11-20 13:52:01.592765] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:31:04.501 13:52:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:31:04.501 13:52:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:04.501 13:52:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:31:04.501 13:52:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:31:04.501 13:52:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:31:04.501 13:52:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:04.501 13:52:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:31:04.501 13:52:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:31:04.501 13:52:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:31:04.501 13:52:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:31:04.501 [2024-11-20 13:52:01.717497] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:31:04.501 [2024-11-20 13:52:01.717560] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60400 ] 00:31:04.760 [2024-11-20 13:52:01.863174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:04.760 [2024-11-20 13:52:01.915454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:04.760 [2024-11-20 13:52:01.956951] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:04.760  [2024-11-20T13:52:02.344Z] Copying: 512/512 [B] (average 500 kBps) 00:31:05.021 00:31:05.021 13:52:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ w9evrn0pxg12ycm9fhgz4o7z3tbyf1mpynddlyig9w6qas3rzijmsrjcsy4qb6u3xq0qo2v694re5pidu17hhwstpmxhrmo5lmpc2q0u813hev4jw6xyx1pg6jxbclhqck2qb9dz7sqt4kkdvel9oxcokzvaykqrnzd7fvqdb81wc0rwrcphezt0b7knfke1fmjns99n66jt012iu4ibdoey3rfrj5rv699visk8z0ama5s576iu9zz4rfm8t9mswi03w1qnbm6jyyze5162fxm09qx1m18lbctr4aykwkk593jtmt2vrv8hd5mjtqzyl17mkt8ekc0xjys08ktsbuxgw2jhgzwk1ct1gqnwv9afd0eau70ry9t3p58wx8hgfh5xgqj6wnxr0mq8des6lsf872hy1lrgg1miwwrchbs3pw2wr5kqv5cb5ems2qb5sdil58fk3tzwbr8he9o3b5oyear2nict0giiynra4sphqxf63mpdwqwqln6ah06u == \w\9\e\v\r\n\0\p\x\g\1\2\y\c\m\9\f\h\g\z\4\o\7\z\3\t\b\y\f\1\m\p\y\n\d\d\l\y\i\g\9\w\6\q\a\s\3\r\z\i\j\m\s\r\j\c\s\y\4\q\b\6\u\3\x\q\0\q\o\2\v\6\9\4\r\e\5\p\i\d\u\1\7\h\h\w\s\t\p\m\x\h\r\m\o\5\l\m\p\c\2\q\0\u\8\1\3\h\e\v\4\j\w\6\x\y\x\1\p\g\6\j\x\b\c\l\h\q\c\k\2\q\b\9\d\z\7\s\q\t\4\k\k\d\v\e\l\9\o\x\c\o\k\z\v\a\y\k\q\r\n\z\d\7\f\v\q\d\b\8\1\w\c\0\r\w\r\c\p\h\e\z\t\0\b\7\k\n\f\k\e\1\f\m\j\n\s\9\9\n\6\6\j\t\0\1\2\i\u\4\i\b\d\o\e\y\3\r\f\r\j\5\r\v\6\9\9\v\i\s\k\8\z\0\a\m\a\5\s\5\7\6\i\u\9\z\z\4\r\f\m\8\t\9\m\s\w\i\0\3\w\1\q\n\b\m\6\j\y\y\z\e\5\1\6\2\f\x\m\0\9\q\x\1\m\1\8\l\b\c\t\r\4\a\y\k\w\k\k\5\9\3\j\t\m\t\2\v\r\v\8\h\d\5\m\j\t\q\z\y\l\1\7\m\k\t\8\e\k\c\0\x\j\y\s\0\8\k\t\s\b\u\x\g\w\2\j\h\g\z\w\k\1\c\t\1\g\q\n\w\v\9\a\f\d\0\e\a\u\7\0\r\y\9\t\3\p\5\8\w\x\8\h\g\f\h\5\x\g\q\j\6\w\n\x\r\0\m\q\8\d\e\s\6\l\s\f\8\7\2\h\y\1\l\r\g\g\1\m\i\w\w\r\c\h\b\s\3\p\w\2\w\r\5\k\q\v\5\c\b\5\e\m\s\2\q\b\5\s\d\i\l\5\8\f\k\3\t\z\w\b\r\8\h\e\9\o\3\b\5\o\y\e\a\r\2\n\i\c\t\0\g\i\i\y\n\r\a\4\s\p\h\q\x\f\6\3\m\p\d\w\q\w\q\l\n\6\a\h\0\6\u ]] 00:31:05.021 00:31:05.021 real 0m1.494s 00:31:05.021 user 0m0.825s 00:31:05.021 sys 0m0.462s 00:31:05.021 13:52:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:05.021 ************************************ 00:31:05.021 END TEST dd_flag_nofollow 00:31:05.021 ************************************ 00:31:05.021 13:52:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:31:05.021 13:52:02 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:31:05.021 13:52:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:05.021 13:52:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:05.021 13:52:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:31:05.021 ************************************ 00:31:05.021 START TEST dd_flag_noatime 00:31:05.021 ************************************ 00:31:05.021 13:52:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:31:05.021 13:52:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:31:05.021 13:52:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:31:05.021 13:52:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:31:05.021 13:52:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:31:05.021 13:52:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:31:05.021 13:52:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:31:05.021 13:52:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1732110721 00:31:05.021 13:52:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:31:05.021 13:52:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1732110722 00:31:05.021 13:52:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:31:05.961 13:52:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:31:06.221 [2024-11-20 13:52:03.299176] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:31:06.221 [2024-11-20 13:52:03.299322] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60448 ] 00:31:06.221 [2024-11-20 13:52:03.448807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:06.221 [2024-11-20 13:52:03.500842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:06.221 [2024-11-20 13:52:03.541797] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:06.480  [2024-11-20T13:52:03.803Z] Copying: 512/512 [B] (average 500 kBps) 00:31:06.480 00:31:06.480 13:52:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:31:06.480 13:52:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1732110721 )) 00:31:06.480 13:52:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:31:06.480 13:52:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1732110722 )) 00:31:06.480 13:52:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:31:06.480 [2024-11-20 13:52:03.782438] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
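The noatime sequence around here records the source file's access time, copies with --iflag=noatime, verifies the atime did not move, then repeats the copy without the flag and expects the atime to advance. A condensed sketch, assuming the workspace filesystem actually updates atimes (as the timestamps in the log show it does):

# Sketch of the noatime check.
atime_before=$(stat --printf=%X dd.dump0)
sleep 1
spdk_dd --if=dd.dump0 --iflag=noatime --of=dd.dump1
(( atime_before == $(stat --printf=%X dd.dump0) ))   # atime untouched when noatime is set
spdk_dd --if=dd.dump0 --of=dd.dump1                  # plain read of the same file...
(( atime_before <  $(stat --printf=%X dd.dump0) ))   # ...must bump the atime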
00:31:06.480 [2024-11-20 13:52:03.782512] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60456 ] 00:31:06.739 [2024-11-20 13:52:03.930482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:06.739 [2024-11-20 13:52:03.982582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:06.739 [2024-11-20 13:52:04.022790] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:06.739  [2024-11-20T13:52:04.321Z] Copying: 512/512 [B] (average 500 kBps) 00:31:06.998 00:31:06.998 13:52:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:31:06.998 13:52:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1732110724 )) 00:31:06.998 00:31:06.998 real 0m2.005s 00:31:06.998 user 0m0.543s 00:31:06.998 sys 0m0.469s 00:31:06.998 13:52:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:06.998 ************************************ 00:31:06.998 END TEST dd_flag_noatime 00:31:06.998 ************************************ 00:31:06.998 13:52:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:31:06.998 13:52:04 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:31:06.998 13:52:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:06.998 13:52:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:06.998 13:52:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:31:06.998 ************************************ 00:31:06.998 START TEST dd_flags_misc 00:31:06.998 ************************************ 00:31:06.998 13:52:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:31:06.998 13:52:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:31:06.998 13:52:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:31:06.998 13:52:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:31:06.998 13:52:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:31:06.998 13:52:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:31:06.998 13:52:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:31:06.998 13:52:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:31:06.998 13:52:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:31:06.998 13:52:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:31:07.258 [2024-11-20 13:52:04.344474] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
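dd_flags_misc iterates the flag matrix declared just above: every input flag in flags_ro combined with every output flag in flags_rw, checking that a 512-byte payload survives each combination unchanged. A sketch of that loop:

# Sketch of the flag matrix; flags_ro/flags_rw are the arrays shown in the log above.
flags_ro=(direct nonblock)
flags_rw=("${flags_ro[@]}" sync dsync)
for flag_ro in "${flags_ro[@]}"; do
    for flag_rw in "${flags_rw[@]}"; do
        spdk_dd --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
        [[ $(<dd.dump1) == $(<dd.dump0) ]]           # contents must match for every combination
    done
done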
00:31:07.258 [2024-11-20 13:52:04.344605] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60490 ] 00:31:07.258 [2024-11-20 13:52:04.493607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:07.258 [2024-11-20 13:52:04.543360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:07.517 [2024-11-20 13:52:04.584278] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:07.517  [2024-11-20T13:52:04.840Z] Copying: 512/512 [B] (average 500 kBps) 00:31:07.517 00:31:07.517 13:52:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 5etwo6jz04if8h6yz9ncoj3ogxm3fx4jlqeu5ukfjmyd657q78bnk4y0434h8gvb6ucj2coi6iuui4ypfl7mvax1w5v0zme8rb9remkqxljsyk93yorab21ggyyh74e9r4eaid77nyzh0lqkcjrhlmk79wq6d4a8r7sovjsbbxkxzmfpnsr2ynpg18gwbdb0xg0k4r8x7of1x9ff78vkatbxjr7eb7lrt8xpzxw6ce7eu8bp76mzzuzpmlw3c2e2zuqhvu5lfti3bw5ur2vwu3nmbiv79q45xldngxzew8l8f83xtnbk0ogrjtdg9p7fwmv33xqwufmdqwm3gcgq990h8zho394vneu7m491hymj2rao9c6xpqpdoqxubr0wye96qx8lfk46rwd3j2nwm601gzs7dq23a4gubniu39dpzl9rqkh5wo97kdo3lprpnlrkpfovilez04v8x7dxfs97wc1odooe3mnaq64ve1ogf2xg5s3jvtce4owcm1zj == \5\e\t\w\o\6\j\z\0\4\i\f\8\h\6\y\z\9\n\c\o\j\3\o\g\x\m\3\f\x\4\j\l\q\e\u\5\u\k\f\j\m\y\d\6\5\7\q\7\8\b\n\k\4\y\0\4\3\4\h\8\g\v\b\6\u\c\j\2\c\o\i\6\i\u\u\i\4\y\p\f\l\7\m\v\a\x\1\w\5\v\0\z\m\e\8\r\b\9\r\e\m\k\q\x\l\j\s\y\k\9\3\y\o\r\a\b\2\1\g\g\y\y\h\7\4\e\9\r\4\e\a\i\d\7\7\n\y\z\h\0\l\q\k\c\j\r\h\l\m\k\7\9\w\q\6\d\4\a\8\r\7\s\o\v\j\s\b\b\x\k\x\z\m\f\p\n\s\r\2\y\n\p\g\1\8\g\w\b\d\b\0\x\g\0\k\4\r\8\x\7\o\f\1\x\9\f\f\7\8\v\k\a\t\b\x\j\r\7\e\b\7\l\r\t\8\x\p\z\x\w\6\c\e\7\e\u\8\b\p\7\6\m\z\z\u\z\p\m\l\w\3\c\2\e\2\z\u\q\h\v\u\5\l\f\t\i\3\b\w\5\u\r\2\v\w\u\3\n\m\b\i\v\7\9\q\4\5\x\l\d\n\g\x\z\e\w\8\l\8\f\8\3\x\t\n\b\k\0\o\g\r\j\t\d\g\9\p\7\f\w\m\v\3\3\x\q\w\u\f\m\d\q\w\m\3\g\c\g\q\9\9\0\h\8\z\h\o\3\9\4\v\n\e\u\7\m\4\9\1\h\y\m\j\2\r\a\o\9\c\6\x\p\q\p\d\o\q\x\u\b\r\0\w\y\e\9\6\q\x\8\l\f\k\4\6\r\w\d\3\j\2\n\w\m\6\0\1\g\z\s\7\d\q\2\3\a\4\g\u\b\n\i\u\3\9\d\p\z\l\9\r\q\k\h\5\w\o\9\7\k\d\o\3\l\p\r\p\n\l\r\k\p\f\o\v\i\l\e\z\0\4\v\8\x\7\d\x\f\s\9\7\w\c\1\o\d\o\o\e\3\m\n\a\q\6\4\v\e\1\o\g\f\2\x\g\5\s\3\j\v\t\c\e\4\o\w\c\m\1\z\j ]] 00:31:07.517 13:52:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:31:07.517 13:52:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:31:07.517 [2024-11-20 13:52:04.830770] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:31:07.517 [2024-11-20 13:52:04.830880] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60494 ] 00:31:07.775 [2024-11-20 13:52:04.979127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:07.775 [2024-11-20 13:52:05.034080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:07.775 [2024-11-20 13:52:05.077045] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:08.033  [2024-11-20T13:52:05.356Z] Copying: 512/512 [B] (average 500 kBps) 00:31:08.033 00:31:08.033 13:52:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 5etwo6jz04if8h6yz9ncoj3ogxm3fx4jlqeu5ukfjmyd657q78bnk4y0434h8gvb6ucj2coi6iuui4ypfl7mvax1w5v0zme8rb9remkqxljsyk93yorab21ggyyh74e9r4eaid77nyzh0lqkcjrhlmk79wq6d4a8r7sovjsbbxkxzmfpnsr2ynpg18gwbdb0xg0k4r8x7of1x9ff78vkatbxjr7eb7lrt8xpzxw6ce7eu8bp76mzzuzpmlw3c2e2zuqhvu5lfti3bw5ur2vwu3nmbiv79q45xldngxzew8l8f83xtnbk0ogrjtdg9p7fwmv33xqwufmdqwm3gcgq990h8zho394vneu7m491hymj2rao9c6xpqpdoqxubr0wye96qx8lfk46rwd3j2nwm601gzs7dq23a4gubniu39dpzl9rqkh5wo97kdo3lprpnlrkpfovilez04v8x7dxfs97wc1odooe3mnaq64ve1ogf2xg5s3jvtce4owcm1zj == \5\e\t\w\o\6\j\z\0\4\i\f\8\h\6\y\z\9\n\c\o\j\3\o\g\x\m\3\f\x\4\j\l\q\e\u\5\u\k\f\j\m\y\d\6\5\7\q\7\8\b\n\k\4\y\0\4\3\4\h\8\g\v\b\6\u\c\j\2\c\o\i\6\i\u\u\i\4\y\p\f\l\7\m\v\a\x\1\w\5\v\0\z\m\e\8\r\b\9\r\e\m\k\q\x\l\j\s\y\k\9\3\y\o\r\a\b\2\1\g\g\y\y\h\7\4\e\9\r\4\e\a\i\d\7\7\n\y\z\h\0\l\q\k\c\j\r\h\l\m\k\7\9\w\q\6\d\4\a\8\r\7\s\o\v\j\s\b\b\x\k\x\z\m\f\p\n\s\r\2\y\n\p\g\1\8\g\w\b\d\b\0\x\g\0\k\4\r\8\x\7\o\f\1\x\9\f\f\7\8\v\k\a\t\b\x\j\r\7\e\b\7\l\r\t\8\x\p\z\x\w\6\c\e\7\e\u\8\b\p\7\6\m\z\z\u\z\p\m\l\w\3\c\2\e\2\z\u\q\h\v\u\5\l\f\t\i\3\b\w\5\u\r\2\v\w\u\3\n\m\b\i\v\7\9\q\4\5\x\l\d\n\g\x\z\e\w\8\l\8\f\8\3\x\t\n\b\k\0\o\g\r\j\t\d\g\9\p\7\f\w\m\v\3\3\x\q\w\u\f\m\d\q\w\m\3\g\c\g\q\9\9\0\h\8\z\h\o\3\9\4\v\n\e\u\7\m\4\9\1\h\y\m\j\2\r\a\o\9\c\6\x\p\q\p\d\o\q\x\u\b\r\0\w\y\e\9\6\q\x\8\l\f\k\4\6\r\w\d\3\j\2\n\w\m\6\0\1\g\z\s\7\d\q\2\3\a\4\g\u\b\n\i\u\3\9\d\p\z\l\9\r\q\k\h\5\w\o\9\7\k\d\o\3\l\p\r\p\n\l\r\k\p\f\o\v\i\l\e\z\0\4\v\8\x\7\d\x\f\s\9\7\w\c\1\o\d\o\o\e\3\m\n\a\q\6\4\v\e\1\o\g\f\2\x\g\5\s\3\j\v\t\c\e\4\o\w\c\m\1\z\j ]] 00:31:08.033 13:52:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:31:08.033 13:52:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:31:08.033 [2024-11-20 13:52:05.310980] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:31:08.033 [2024-11-20 13:52:05.311058] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60509 ] 00:31:08.292 [2024-11-20 13:52:05.459860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:08.292 [2024-11-20 13:52:05.514936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:08.292 [2024-11-20 13:52:05.558185] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:08.292  [2024-11-20T13:52:05.873Z] Copying: 512/512 [B] (average 83 kBps) 00:31:08.550 00:31:08.550 13:52:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 5etwo6jz04if8h6yz9ncoj3ogxm3fx4jlqeu5ukfjmyd657q78bnk4y0434h8gvb6ucj2coi6iuui4ypfl7mvax1w5v0zme8rb9remkqxljsyk93yorab21ggyyh74e9r4eaid77nyzh0lqkcjrhlmk79wq6d4a8r7sovjsbbxkxzmfpnsr2ynpg18gwbdb0xg0k4r8x7of1x9ff78vkatbxjr7eb7lrt8xpzxw6ce7eu8bp76mzzuzpmlw3c2e2zuqhvu5lfti3bw5ur2vwu3nmbiv79q45xldngxzew8l8f83xtnbk0ogrjtdg9p7fwmv33xqwufmdqwm3gcgq990h8zho394vneu7m491hymj2rao9c6xpqpdoqxubr0wye96qx8lfk46rwd3j2nwm601gzs7dq23a4gubniu39dpzl9rqkh5wo97kdo3lprpnlrkpfovilez04v8x7dxfs97wc1odooe3mnaq64ve1ogf2xg5s3jvtce4owcm1zj == \5\e\t\w\o\6\j\z\0\4\i\f\8\h\6\y\z\9\n\c\o\j\3\o\g\x\m\3\f\x\4\j\l\q\e\u\5\u\k\f\j\m\y\d\6\5\7\q\7\8\b\n\k\4\y\0\4\3\4\h\8\g\v\b\6\u\c\j\2\c\o\i\6\i\u\u\i\4\y\p\f\l\7\m\v\a\x\1\w\5\v\0\z\m\e\8\r\b\9\r\e\m\k\q\x\l\j\s\y\k\9\3\y\o\r\a\b\2\1\g\g\y\y\h\7\4\e\9\r\4\e\a\i\d\7\7\n\y\z\h\0\l\q\k\c\j\r\h\l\m\k\7\9\w\q\6\d\4\a\8\r\7\s\o\v\j\s\b\b\x\k\x\z\m\f\p\n\s\r\2\y\n\p\g\1\8\g\w\b\d\b\0\x\g\0\k\4\r\8\x\7\o\f\1\x\9\f\f\7\8\v\k\a\t\b\x\j\r\7\e\b\7\l\r\t\8\x\p\z\x\w\6\c\e\7\e\u\8\b\p\7\6\m\z\z\u\z\p\m\l\w\3\c\2\e\2\z\u\q\h\v\u\5\l\f\t\i\3\b\w\5\u\r\2\v\w\u\3\n\m\b\i\v\7\9\q\4\5\x\l\d\n\g\x\z\e\w\8\l\8\f\8\3\x\t\n\b\k\0\o\g\r\j\t\d\g\9\p\7\f\w\m\v\3\3\x\q\w\u\f\m\d\q\w\m\3\g\c\g\q\9\9\0\h\8\z\h\o\3\9\4\v\n\e\u\7\m\4\9\1\h\y\m\j\2\r\a\o\9\c\6\x\p\q\p\d\o\q\x\u\b\r\0\w\y\e\9\6\q\x\8\l\f\k\4\6\r\w\d\3\j\2\n\w\m\6\0\1\g\z\s\7\d\q\2\3\a\4\g\u\b\n\i\u\3\9\d\p\z\l\9\r\q\k\h\5\w\o\9\7\k\d\o\3\l\p\r\p\n\l\r\k\p\f\o\v\i\l\e\z\0\4\v\8\x\7\d\x\f\s\9\7\w\c\1\o\d\o\o\e\3\m\n\a\q\6\4\v\e\1\o\g\f\2\x\g\5\s\3\j\v\t\c\e\4\o\w\c\m\1\z\j ]] 00:31:08.550 13:52:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:31:08.550 13:52:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:31:08.550 [2024-11-20 13:52:05.799019] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:31:08.551 [2024-11-20 13:52:05.799168] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60513 ] 00:31:08.809 [2024-11-20 13:52:05.936947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:08.809 [2024-11-20 13:52:05.994524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:08.809 [2024-11-20 13:52:06.037714] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:08.809  [2024-11-20T13:52:06.393Z] Copying: 512/512 [B] (average 166 kBps) 00:31:09.070 00:31:09.070 13:52:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 5etwo6jz04if8h6yz9ncoj3ogxm3fx4jlqeu5ukfjmyd657q78bnk4y0434h8gvb6ucj2coi6iuui4ypfl7mvax1w5v0zme8rb9remkqxljsyk93yorab21ggyyh74e9r4eaid77nyzh0lqkcjrhlmk79wq6d4a8r7sovjsbbxkxzmfpnsr2ynpg18gwbdb0xg0k4r8x7of1x9ff78vkatbxjr7eb7lrt8xpzxw6ce7eu8bp76mzzuzpmlw3c2e2zuqhvu5lfti3bw5ur2vwu3nmbiv79q45xldngxzew8l8f83xtnbk0ogrjtdg9p7fwmv33xqwufmdqwm3gcgq990h8zho394vneu7m491hymj2rao9c6xpqpdoqxubr0wye96qx8lfk46rwd3j2nwm601gzs7dq23a4gubniu39dpzl9rqkh5wo97kdo3lprpnlrkpfovilez04v8x7dxfs97wc1odooe3mnaq64ve1ogf2xg5s3jvtce4owcm1zj == \5\e\t\w\o\6\j\z\0\4\i\f\8\h\6\y\z\9\n\c\o\j\3\o\g\x\m\3\f\x\4\j\l\q\e\u\5\u\k\f\j\m\y\d\6\5\7\q\7\8\b\n\k\4\y\0\4\3\4\h\8\g\v\b\6\u\c\j\2\c\o\i\6\i\u\u\i\4\y\p\f\l\7\m\v\a\x\1\w\5\v\0\z\m\e\8\r\b\9\r\e\m\k\q\x\l\j\s\y\k\9\3\y\o\r\a\b\2\1\g\g\y\y\h\7\4\e\9\r\4\e\a\i\d\7\7\n\y\z\h\0\l\q\k\c\j\r\h\l\m\k\7\9\w\q\6\d\4\a\8\r\7\s\o\v\j\s\b\b\x\k\x\z\m\f\p\n\s\r\2\y\n\p\g\1\8\g\w\b\d\b\0\x\g\0\k\4\r\8\x\7\o\f\1\x\9\f\f\7\8\v\k\a\t\b\x\j\r\7\e\b\7\l\r\t\8\x\p\z\x\w\6\c\e\7\e\u\8\b\p\7\6\m\z\z\u\z\p\m\l\w\3\c\2\e\2\z\u\q\h\v\u\5\l\f\t\i\3\b\w\5\u\r\2\v\w\u\3\n\m\b\i\v\7\9\q\4\5\x\l\d\n\g\x\z\e\w\8\l\8\f\8\3\x\t\n\b\k\0\o\g\r\j\t\d\g\9\p\7\f\w\m\v\3\3\x\q\w\u\f\m\d\q\w\m\3\g\c\g\q\9\9\0\h\8\z\h\o\3\9\4\v\n\e\u\7\m\4\9\1\h\y\m\j\2\r\a\o\9\c\6\x\p\q\p\d\o\q\x\u\b\r\0\w\y\e\9\6\q\x\8\l\f\k\4\6\r\w\d\3\j\2\n\w\m\6\0\1\g\z\s\7\d\q\2\3\a\4\g\u\b\n\i\u\3\9\d\p\z\l\9\r\q\k\h\5\w\o\9\7\k\d\o\3\l\p\r\p\n\l\r\k\p\f\o\v\i\l\e\z\0\4\v\8\x\7\d\x\f\s\9\7\w\c\1\o\d\o\o\e\3\m\n\a\q\6\4\v\e\1\o\g\f\2\x\g\5\s\3\j\v\t\c\e\4\o\w\c\m\1\z\j ]] 00:31:09.070 13:52:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:31:09.070 13:52:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:31:09.070 13:52:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:31:09.070 13:52:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:31:09.070 13:52:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:31:09.070 13:52:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:31:09.070 [2024-11-20 13:52:06.298646] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:31:09.070 [2024-11-20 13:52:06.298727] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60527 ] 00:31:09.329 [2024-11-20 13:52:06.447273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:09.329 [2024-11-20 13:52:06.504871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:09.329 [2024-11-20 13:52:06.548819] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:09.329  [2024-11-20T13:52:06.911Z] Copying: 512/512 [B] (average 500 kBps) 00:31:09.588 00:31:09.588 13:52:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ pk9paq8vff87npvqvmpyioesz7s2q42qssj175s448kjdqc2gf7964ddq3pgogcmds7rozz3bqgr9ekcywnpy6o54vmryu7tducz634da769hd5q2xgvlxa22lcppr3fdw0lw851p36d53a3ejc2tcaj2i0vse88bz8cpjw7dqpk7kb3uqxhseq84dnoxegni0uwqjaiw8dn2ytnt9of5qfd3r9v50mzdsxjujljpfipf9p30yg1ybg706lfysp3tk13syy5hrpt4bngrr2o1llxpiahb55ebhwve56teyzysek59iidfxdwkp0q3rtiaqf5hlopqg9jk54v6whzqusmrggkmstyc1u2nawsy79qvdu2ooduyjqp01o4254hy2swebn4b0yzrri6idbbom83m409rao2s89a6rig1ip7h8bvczjm9ia5jpji8c5shozih0a3ce9nskwacmngjox4ubj766fqorhz8fnuje0pftkueuv01709ssqooaxx == \p\k\9\p\a\q\8\v\f\f\8\7\n\p\v\q\v\m\p\y\i\o\e\s\z\7\s\2\q\4\2\q\s\s\j\1\7\5\s\4\4\8\k\j\d\q\c\2\g\f\7\9\6\4\d\d\q\3\p\g\o\g\c\m\d\s\7\r\o\z\z\3\b\q\g\r\9\e\k\c\y\w\n\p\y\6\o\5\4\v\m\r\y\u\7\t\d\u\c\z\6\3\4\d\a\7\6\9\h\d\5\q\2\x\g\v\l\x\a\2\2\l\c\p\p\r\3\f\d\w\0\l\w\8\5\1\p\3\6\d\5\3\a\3\e\j\c\2\t\c\a\j\2\i\0\v\s\e\8\8\b\z\8\c\p\j\w\7\d\q\p\k\7\k\b\3\u\q\x\h\s\e\q\8\4\d\n\o\x\e\g\n\i\0\u\w\q\j\a\i\w\8\d\n\2\y\t\n\t\9\o\f\5\q\f\d\3\r\9\v\5\0\m\z\d\s\x\j\u\j\l\j\p\f\i\p\f\9\p\3\0\y\g\1\y\b\g\7\0\6\l\f\y\s\p\3\t\k\1\3\s\y\y\5\h\r\p\t\4\b\n\g\r\r\2\o\1\l\l\x\p\i\a\h\b\5\5\e\b\h\w\v\e\5\6\t\e\y\z\y\s\e\k\5\9\i\i\d\f\x\d\w\k\p\0\q\3\r\t\i\a\q\f\5\h\l\o\p\q\g\9\j\k\5\4\v\6\w\h\z\q\u\s\m\r\g\g\k\m\s\t\y\c\1\u\2\n\a\w\s\y\7\9\q\v\d\u\2\o\o\d\u\y\j\q\p\0\1\o\4\2\5\4\h\y\2\s\w\e\b\n\4\b\0\y\z\r\r\i\6\i\d\b\b\o\m\8\3\m\4\0\9\r\a\o\2\s\8\9\a\6\r\i\g\1\i\p\7\h\8\b\v\c\z\j\m\9\i\a\5\j\p\j\i\8\c\5\s\h\o\z\i\h\0\a\3\c\e\9\n\s\k\w\a\c\m\n\g\j\o\x\4\u\b\j\7\6\6\f\q\o\r\h\z\8\f\n\u\j\e\0\p\f\t\k\u\e\u\v\0\1\7\0\9\s\s\q\o\o\a\x\x ]] 00:31:09.588 13:52:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:31:09.588 13:52:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:31:09.588 [2024-11-20 13:52:06.797220] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:31:09.588 [2024-11-20 13:52:06.797358] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60534 ] 00:31:09.847 [2024-11-20 13:52:06.987517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:09.847 [2024-11-20 13:52:07.042439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:09.847 [2024-11-20 13:52:07.086105] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:09.847  [2024-11-20T13:52:07.432Z] Copying: 512/512 [B] (average 500 kBps) 00:31:10.109 00:31:10.109 13:52:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ pk9paq8vff87npvqvmpyioesz7s2q42qssj175s448kjdqc2gf7964ddq3pgogcmds7rozz3bqgr9ekcywnpy6o54vmryu7tducz634da769hd5q2xgvlxa22lcppr3fdw0lw851p36d53a3ejc2tcaj2i0vse88bz8cpjw7dqpk7kb3uqxhseq84dnoxegni0uwqjaiw8dn2ytnt9of5qfd3r9v50mzdsxjujljpfipf9p30yg1ybg706lfysp3tk13syy5hrpt4bngrr2o1llxpiahb55ebhwve56teyzysek59iidfxdwkp0q3rtiaqf5hlopqg9jk54v6whzqusmrggkmstyc1u2nawsy79qvdu2ooduyjqp01o4254hy2swebn4b0yzrri6idbbom83m409rao2s89a6rig1ip7h8bvczjm9ia5jpji8c5shozih0a3ce9nskwacmngjox4ubj766fqorhz8fnuje0pftkueuv01709ssqooaxx == \p\k\9\p\a\q\8\v\f\f\8\7\n\p\v\q\v\m\p\y\i\o\e\s\z\7\s\2\q\4\2\q\s\s\j\1\7\5\s\4\4\8\k\j\d\q\c\2\g\f\7\9\6\4\d\d\q\3\p\g\o\g\c\m\d\s\7\r\o\z\z\3\b\q\g\r\9\e\k\c\y\w\n\p\y\6\o\5\4\v\m\r\y\u\7\t\d\u\c\z\6\3\4\d\a\7\6\9\h\d\5\q\2\x\g\v\l\x\a\2\2\l\c\p\p\r\3\f\d\w\0\l\w\8\5\1\p\3\6\d\5\3\a\3\e\j\c\2\t\c\a\j\2\i\0\v\s\e\8\8\b\z\8\c\p\j\w\7\d\q\p\k\7\k\b\3\u\q\x\h\s\e\q\8\4\d\n\o\x\e\g\n\i\0\u\w\q\j\a\i\w\8\d\n\2\y\t\n\t\9\o\f\5\q\f\d\3\r\9\v\5\0\m\z\d\s\x\j\u\j\l\j\p\f\i\p\f\9\p\3\0\y\g\1\y\b\g\7\0\6\l\f\y\s\p\3\t\k\1\3\s\y\y\5\h\r\p\t\4\b\n\g\r\r\2\o\1\l\l\x\p\i\a\h\b\5\5\e\b\h\w\v\e\5\6\t\e\y\z\y\s\e\k\5\9\i\i\d\f\x\d\w\k\p\0\q\3\r\t\i\a\q\f\5\h\l\o\p\q\g\9\j\k\5\4\v\6\w\h\z\q\u\s\m\r\g\g\k\m\s\t\y\c\1\u\2\n\a\w\s\y\7\9\q\v\d\u\2\o\o\d\u\y\j\q\p\0\1\o\4\2\5\4\h\y\2\s\w\e\b\n\4\b\0\y\z\r\r\i\6\i\d\b\b\o\m\8\3\m\4\0\9\r\a\o\2\s\8\9\a\6\r\i\g\1\i\p\7\h\8\b\v\c\z\j\m\9\i\a\5\j\p\j\i\8\c\5\s\h\o\z\i\h\0\a\3\c\e\9\n\s\k\w\a\c\m\n\g\j\o\x\4\u\b\j\7\6\6\f\q\o\r\h\z\8\f\n\u\j\e\0\p\f\t\k\u\e\u\v\0\1\7\0\9\s\s\q\o\o\a\x\x ]] 00:31:10.109 13:52:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:31:10.109 13:52:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:31:10.109 [2024-11-20 13:52:07.333424] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:31:10.109 [2024-11-20 13:52:07.333853] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60544 ] 00:31:10.382 [2024-11-20 13:52:07.484704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:10.382 [2024-11-20 13:52:07.540977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:10.382 [2024-11-20 13:52:07.584604] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:10.382  [2024-11-20T13:52:07.964Z] Copying: 512/512 [B] (average 250 kBps) 00:31:10.641 00:31:10.641 13:52:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ pk9paq8vff87npvqvmpyioesz7s2q42qssj175s448kjdqc2gf7964ddq3pgogcmds7rozz3bqgr9ekcywnpy6o54vmryu7tducz634da769hd5q2xgvlxa22lcppr3fdw0lw851p36d53a3ejc2tcaj2i0vse88bz8cpjw7dqpk7kb3uqxhseq84dnoxegni0uwqjaiw8dn2ytnt9of5qfd3r9v50mzdsxjujljpfipf9p30yg1ybg706lfysp3tk13syy5hrpt4bngrr2o1llxpiahb55ebhwve56teyzysek59iidfxdwkp0q3rtiaqf5hlopqg9jk54v6whzqusmrggkmstyc1u2nawsy79qvdu2ooduyjqp01o4254hy2swebn4b0yzrri6idbbom83m409rao2s89a6rig1ip7h8bvczjm9ia5jpji8c5shozih0a3ce9nskwacmngjox4ubj766fqorhz8fnuje0pftkueuv01709ssqooaxx == \p\k\9\p\a\q\8\v\f\f\8\7\n\p\v\q\v\m\p\y\i\o\e\s\z\7\s\2\q\4\2\q\s\s\j\1\7\5\s\4\4\8\k\j\d\q\c\2\g\f\7\9\6\4\d\d\q\3\p\g\o\g\c\m\d\s\7\r\o\z\z\3\b\q\g\r\9\e\k\c\y\w\n\p\y\6\o\5\4\v\m\r\y\u\7\t\d\u\c\z\6\3\4\d\a\7\6\9\h\d\5\q\2\x\g\v\l\x\a\2\2\l\c\p\p\r\3\f\d\w\0\l\w\8\5\1\p\3\6\d\5\3\a\3\e\j\c\2\t\c\a\j\2\i\0\v\s\e\8\8\b\z\8\c\p\j\w\7\d\q\p\k\7\k\b\3\u\q\x\h\s\e\q\8\4\d\n\o\x\e\g\n\i\0\u\w\q\j\a\i\w\8\d\n\2\y\t\n\t\9\o\f\5\q\f\d\3\r\9\v\5\0\m\z\d\s\x\j\u\j\l\j\p\f\i\p\f\9\p\3\0\y\g\1\y\b\g\7\0\6\l\f\y\s\p\3\t\k\1\3\s\y\y\5\h\r\p\t\4\b\n\g\r\r\2\o\1\l\l\x\p\i\a\h\b\5\5\e\b\h\w\v\e\5\6\t\e\y\z\y\s\e\k\5\9\i\i\d\f\x\d\w\k\p\0\q\3\r\t\i\a\q\f\5\h\l\o\p\q\g\9\j\k\5\4\v\6\w\h\z\q\u\s\m\r\g\g\k\m\s\t\y\c\1\u\2\n\a\w\s\y\7\9\q\v\d\u\2\o\o\d\u\y\j\q\p\0\1\o\4\2\5\4\h\y\2\s\w\e\b\n\4\b\0\y\z\r\r\i\6\i\d\b\b\o\m\8\3\m\4\0\9\r\a\o\2\s\8\9\a\6\r\i\g\1\i\p\7\h\8\b\v\c\z\j\m\9\i\a\5\j\p\j\i\8\c\5\s\h\o\z\i\h\0\a\3\c\e\9\n\s\k\w\a\c\m\n\g\j\o\x\4\u\b\j\7\6\6\f\q\o\r\h\z\8\f\n\u\j\e\0\p\f\t\k\u\e\u\v\0\1\7\0\9\s\s\q\o\o\a\x\x ]] 00:31:10.641 13:52:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:31:10.641 13:52:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:31:10.641 [2024-11-20 13:52:07.848334] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:31:10.641 [2024-11-20 13:52:07.848423] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60553 ] 00:31:10.899 [2024-11-20 13:52:08.002700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:10.899 [2024-11-20 13:52:08.057914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:10.899 [2024-11-20 13:52:08.101834] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:10.899  [2024-11-20T13:52:08.482Z] Copying: 512/512 [B] (average 166 kBps) 00:31:11.159 00:31:11.159 ************************************ 00:31:11.159 END TEST dd_flags_misc 00:31:11.159 ************************************ 00:31:11.159 13:52:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ pk9paq8vff87npvqvmpyioesz7s2q42qssj175s448kjdqc2gf7964ddq3pgogcmds7rozz3bqgr9ekcywnpy6o54vmryu7tducz634da769hd5q2xgvlxa22lcppr3fdw0lw851p36d53a3ejc2tcaj2i0vse88bz8cpjw7dqpk7kb3uqxhseq84dnoxegni0uwqjaiw8dn2ytnt9of5qfd3r9v50mzdsxjujljpfipf9p30yg1ybg706lfysp3tk13syy5hrpt4bngrr2o1llxpiahb55ebhwve56teyzysek59iidfxdwkp0q3rtiaqf5hlopqg9jk54v6whzqusmrggkmstyc1u2nawsy79qvdu2ooduyjqp01o4254hy2swebn4b0yzrri6idbbom83m409rao2s89a6rig1ip7h8bvczjm9ia5jpji8c5shozih0a3ce9nskwacmngjox4ubj766fqorhz8fnuje0pftkueuv01709ssqooaxx == \p\k\9\p\a\q\8\v\f\f\8\7\n\p\v\q\v\m\p\y\i\o\e\s\z\7\s\2\q\4\2\q\s\s\j\1\7\5\s\4\4\8\k\j\d\q\c\2\g\f\7\9\6\4\d\d\q\3\p\g\o\g\c\m\d\s\7\r\o\z\z\3\b\q\g\r\9\e\k\c\y\w\n\p\y\6\o\5\4\v\m\r\y\u\7\t\d\u\c\z\6\3\4\d\a\7\6\9\h\d\5\q\2\x\g\v\l\x\a\2\2\l\c\p\p\r\3\f\d\w\0\l\w\8\5\1\p\3\6\d\5\3\a\3\e\j\c\2\t\c\a\j\2\i\0\v\s\e\8\8\b\z\8\c\p\j\w\7\d\q\p\k\7\k\b\3\u\q\x\h\s\e\q\8\4\d\n\o\x\e\g\n\i\0\u\w\q\j\a\i\w\8\d\n\2\y\t\n\t\9\o\f\5\q\f\d\3\r\9\v\5\0\m\z\d\s\x\j\u\j\l\j\p\f\i\p\f\9\p\3\0\y\g\1\y\b\g\7\0\6\l\f\y\s\p\3\t\k\1\3\s\y\y\5\h\r\p\t\4\b\n\g\r\r\2\o\1\l\l\x\p\i\a\h\b\5\5\e\b\h\w\v\e\5\6\t\e\y\z\y\s\e\k\5\9\i\i\d\f\x\d\w\k\p\0\q\3\r\t\i\a\q\f\5\h\l\o\p\q\g\9\j\k\5\4\v\6\w\h\z\q\u\s\m\r\g\g\k\m\s\t\y\c\1\u\2\n\a\w\s\y\7\9\q\v\d\u\2\o\o\d\u\y\j\q\p\0\1\o\4\2\5\4\h\y\2\s\w\e\b\n\4\b\0\y\z\r\r\i\6\i\d\b\b\o\m\8\3\m\4\0\9\r\a\o\2\s\8\9\a\6\r\i\g\1\i\p\7\h\8\b\v\c\z\j\m\9\i\a\5\j\p\j\i\8\c\5\s\h\o\z\i\h\0\a\3\c\e\9\n\s\k\w\a\c\m\n\g\j\o\x\4\u\b\j\7\6\6\f\q\o\r\h\z\8\f\n\u\j\e\0\p\f\t\k\u\e\u\v\0\1\7\0\9\s\s\q\o\o\a\x\x ]] 00:31:11.159 00:31:11.159 real 0m4.020s 00:31:11.159 user 0m2.236s 00:31:11.159 sys 0m1.846s 00:31:11.159 13:52:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:11.159 13:52:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:31:11.159 13:52:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:31:11.159 13:52:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:31:11.159 * Second test run, disabling liburing, forcing AIO 00:31:11.159 13:52:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:31:11.159 13:52:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:31:11.159 13:52:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:11.159 13:52:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:11.159 13:52:08 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:31:11.159 ************************************ 00:31:11.159 START TEST dd_flag_append_forced_aio 00:31:11.159 ************************************ 00:31:11.159 13:52:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:31:11.159 13:52:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:31:11.159 13:52:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:31:11.159 13:52:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:31:11.159 13:52:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:31:11.159 13:52:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:31:11.159 13:52:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=wree3tms08meqg2sgrqk88hzigfca8rd 00:31:11.159 13:52:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:31:11.159 13:52:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:31:11.159 13:52:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:31:11.159 13:52:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=htghmxbxvedw3mzbfqwgwuiepo7v1jvo 00:31:11.159 13:52:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s wree3tms08meqg2sgrqk88hzigfca8rd 00:31:11.159 13:52:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s htghmxbxvedw3mzbfqwgwuiepo7v1jvo 00:31:11.159 13:52:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:31:11.159 [2024-11-20 13:52:08.422490] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
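The run just traced reduces to: seed dd.dump0 and dd.dump1 with two short random strings, append dump0 onto dump1 with --oflag=append, and verify that dump1 now holds its original bytes followed by dump0's. A rough stand-alone equivalent is below, reusing the two 32-character strings from this very run purely for illustration and keeping the same path and binary assumptions as the earlier sketch.

    # Sketch of the dd_flag_append_forced_aio check traced above.
    set -euo pipefail
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    src=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
    dst=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
    dump0=wree3tms08meqg2sgrqk88hzigfca8rd   # 32-byte payloads reused from this log run
    dump1=htghmxbxvedw3mzbfqwgwuiepo7v1jvo
    printf %s "$dump0" > "$src"
    printf %s "$dump1" > "$dst"
    "$SPDK_DD" --aio --if="$src" --of="$dst" --oflag=append
    [[ "$(< "$dst")" == "${dump1}${dump0}" ]]   # old contents must come first, appended bytes last

The --aio switch is the piece this second pass adds: liburing is disabled and the copy is forced through the AIO path, which is exactly what DD_APP+=("--aio") in the trace arranges.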
00:31:11.159 [2024-11-20 13:52:08.422646] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60586 ] 00:31:11.418 [2024-11-20 13:52:08.571443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:11.418 [2024-11-20 13:52:08.628513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:11.418 [2024-11-20 13:52:08.671855] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:11.418  [2024-11-20T13:52:09.000Z] Copying: 32/32 [B] (average 31 kBps) 00:31:11.677 00:31:11.677 ************************************ 00:31:11.677 END TEST dd_flag_append_forced_aio 00:31:11.677 13:52:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ htghmxbxvedw3mzbfqwgwuiepo7v1jvowree3tms08meqg2sgrqk88hzigfca8rd == \h\t\g\h\m\x\b\x\v\e\d\w\3\m\z\b\f\q\w\g\w\u\i\e\p\o\7\v\1\j\v\o\w\r\e\e\3\t\m\s\0\8\m\e\q\g\2\s\g\r\q\k\8\8\h\z\i\g\f\c\a\8\r\d ]] 00:31:11.677 00:31:11.677 real 0m0.520s 00:31:11.677 user 0m0.274s 00:31:11.677 sys 0m0.127s 00:31:11.677 13:52:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:11.677 13:52:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:31:11.677 ************************************ 00:31:11.677 13:52:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:31:11.677 13:52:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:11.677 13:52:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:11.677 13:52:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:31:11.677 ************************************ 00:31:11.677 START TEST dd_flag_directory_forced_aio 00:31:11.677 ************************************ 00:31:11.677 13:52:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:31:11.677 13:52:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:31:11.677 13:52:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:31:11.677 13:52:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:31:11.677 13:52:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:11.677 13:52:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:11.677 13:52:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:11.677 13:52:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:11.677 13:52:08 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:11.677 13:52:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:11.677 13:52:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:11.677 13:52:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:31:11.677 13:52:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:31:11.937 [2024-11-20 13:52:09.007965] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:31:11.937 [2024-11-20 13:52:09.008056] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60608 ] 00:31:11.937 [2024-11-20 13:52:09.158978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:11.937 [2024-11-20 13:52:09.212776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:11.937 [2024-11-20 13:52:09.256067] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:12.197 [2024-11-20 13:52:09.287782] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:31:12.197 [2024-11-20 13:52:09.287828] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:31:12.197 [2024-11-20 13:52:09.287843] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:12.197 [2024-11-20 13:52:09.386653] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:31:12.197 13:52:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:31:12.197 13:52:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:12.197 13:52:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:31:12.197 13:52:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:31:12.197 13:52:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:31:12.197 13:52:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:12.197 13:52:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:31:12.197 13:52:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:31:12.197 13:52:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:31:12.197 13:52:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:12.197 13:52:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:12.197 13:52:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:12.197 13:52:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:12.197 13:52:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:12.197 13:52:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:12.197 13:52:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:12.197 13:52:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:31:12.197 13:52:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:31:12.197 [2024-11-20 13:52:09.505800] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:31:12.197 [2024-11-20 13:52:09.505879] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60623 ] 00:31:12.455 [2024-11-20 13:52:09.656824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:12.455 [2024-11-20 13:52:09.716643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:12.455 [2024-11-20 13:52:09.757755] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:12.715 [2024-11-20 13:52:09.790843] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:31:12.715 [2024-11-20 13:52:09.791006] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:31:12.715 [2024-11-20 13:52:09.791023] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:12.715 [2024-11-20 13:52:09.888697] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:31:12.715 13:52:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:31:12.715 13:52:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:12.715 13:52:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:31:12.715 13:52:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:31:12.715 13:52:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:31:12.715 13:52:09 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:12.715 00:31:12.715 real 0m1.000s 00:31:12.715 user 0m0.540s 00:31:12.715 sys 0m0.250s 00:31:12.715 13:52:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:12.715 13:52:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:31:12.715 ************************************ 00:31:12.715 END TEST dd_flag_directory_forced_aio 00:31:12.715 ************************************ 00:31:12.715 13:52:09 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:31:12.715 13:52:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:12.715 13:52:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:12.715 13:52:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:31:12.715 ************************************ 00:31:12.715 START TEST dd_flag_nofollow_forced_aio 00:31:12.715 ************************************ 00:31:12.715 13:52:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:31:12.715 13:52:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:31:12.715 13:52:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:31:12.715 13:52:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:31:12.715 13:52:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:31:12.715 13:52:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:31:12.715 13:52:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:31:12.715 13:52:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:31:12.715 13:52:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:12.715 13:52:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:12.715 13:52:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:12.715 13:52:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:12.715 13:52:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:12.715 13:52:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:12.715 13:52:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:12.715 13:52:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:31:12.715 13:52:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:31:12.974 [2024-11-20 13:52:10.082354] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:31:12.974 [2024-11-20 13:52:10.082423] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60646 ] 00:31:12.974 [2024-11-20 13:52:10.232295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:12.974 [2024-11-20 13:52:10.281632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:13.234 [2024-11-20 13:52:10.323727] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:13.234 [2024-11-20 13:52:10.353605] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:31:13.234 [2024-11-20 13:52:10.353653] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:31:13.234 [2024-11-20 13:52:10.353665] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:13.234 [2024-11-20 13:52:10.447705] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:31:13.234 13:52:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:31:13.234 13:52:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:13.234 13:52:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:31:13.234 13:52:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:31:13.234 13:52:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:31:13.234 13:52:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:13.234 13:52:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:31:13.234 13:52:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:31:13.234 13:52:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:31:13.234 13:52:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:13.234 13:52:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:13.234 13:52:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:13.234 13:52:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:13.234 13:52:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:13.234 13:52:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:13.234 13:52:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:13.234 13:52:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:31:13.234 13:52:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:31:13.494 [2024-11-20 13:52:10.568852] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:31:13.494 [2024-11-20 13:52:10.568912] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60661 ] 00:31:13.494 [2024-11-20 13:52:10.718448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:13.494 [2024-11-20 13:52:10.768178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:13.494 [2024-11-20 13:52:10.809333] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:13.752 [2024-11-20 13:52:10.841077] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:31:13.752 [2024-11-20 13:52:10.841122] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:31:13.752 [2024-11-20 13:52:10.841134] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:13.752 [2024-11-20 13:52:10.937296] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:31:13.752 13:52:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:31:13.752 13:52:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:13.752 13:52:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:31:13.752 13:52:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:31:13.752 13:52:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:31:13.752 13:52:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:13.752 13:52:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:31:13.752 13:52:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:31:13.752 13:52:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:31:13.752 13:52:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:31:13.752 [2024-11-20 13:52:11.059979] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:31:13.752 [2024-11-20 13:52:11.060066] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60663 ] 00:31:14.010 [2024-11-20 13:52:11.208286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:14.010 [2024-11-20 13:52:11.265981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:14.010 [2024-11-20 13:52:11.308390] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:14.267  [2024-11-20T13:52:11.590Z] Copying: 512/512 [B] (average 500 kBps) 00:31:14.267 00:31:14.268 13:52:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ nd8zm142ys496yaxig800f33bd0j7hxoak1jlyvvs4tj7ozwm3ftq9vw5bi27a497vquvq2l5q33c5f1kx0p7t47p7kqa4d6xd4y8f1oetk42ihkhevrtronr0epk724hpgnp4ghcv32tx98bibr0qdnh0yzj632uynjgidm8duof0dwcdcuwm2n2n5ikaq0asu4i7fspzr0t8g98fbmeu6nvhpxr8k8bta80h0l382o6llx7x03k81d0a5n1boyo28wg42ahg6llkr9kbzrz0zudrwstcm13p2qwkf26mg933gc2n96o9k2u7mja7rzbqvqsm50oh68teofs4vwjd9f1jm5eiums3em0pexqr3zd6zlfas0soo45vkl9zr8t65vh3uj5hi8v74o6liixovioawhw5e5vw9uuc6jrcsk4xkybgeohhl1jwxc2qckc31pwyud3ylypp9d7rw3hwyw9ae9yszvahu7mmqk6p5yqkmn5djo0hgtp6sdy7cf == \n\d\8\z\m\1\4\2\y\s\4\9\6\y\a\x\i\g\8\0\0\f\3\3\b\d\0\j\7\h\x\o\a\k\1\j\l\y\v\v\s\4\t\j\7\o\z\w\m\3\f\t\q\9\v\w\5\b\i\2\7\a\4\9\7\v\q\u\v\q\2\l\5\q\3\3\c\5\f\1\k\x\0\p\7\t\4\7\p\7\k\q\a\4\d\6\x\d\4\y\8\f\1\o\e\t\k\4\2\i\h\k\h\e\v\r\t\r\o\n\r\0\e\p\k\7\2\4\h\p\g\n\p\4\g\h\c\v\3\2\t\x\9\8\b\i\b\r\0\q\d\n\h\0\y\z\j\6\3\2\u\y\n\j\g\i\d\m\8\d\u\o\f\0\d\w\c\d\c\u\w\m\2\n\2\n\5\i\k\a\q\0\a\s\u\4\i\7\f\s\p\z\r\0\t\8\g\9\8\f\b\m\e\u\6\n\v\h\p\x\r\8\k\8\b\t\a\8\0\h\0\l\3\8\2\o\6\l\l\x\7\x\0\3\k\8\1\d\0\a\5\n\1\b\o\y\o\2\8\w\g\4\2\a\h\g\6\l\l\k\r\9\k\b\z\r\z\0\z\u\d\r\w\s\t\c\m\1\3\p\2\q\w\k\f\2\6\m\g\9\3\3\g\c\2\n\9\6\o\9\k\2\u\7\m\j\a\7\r\z\b\q\v\q\s\m\5\0\o\h\6\8\t\e\o\f\s\4\v\w\j\d\9\f\1\j\m\5\e\i\u\m\s\3\e\m\0\p\e\x\q\r\3\z\d\6\z\l\f\a\s\0\s\o\o\4\5\v\k\l\9\z\r\8\t\6\5\v\h\3\u\j\5\h\i\8\v\7\4\o\6\l\i\i\x\o\v\i\o\a\w\h\w\5\e\5\v\w\9\u\u\c\6\j\r\c\s\k\4\x\k\y\b\g\e\o\h\h\l\1\j\w\x\c\2\q\c\k\c\3\1\p\w\y\u\d\3\y\l\y\p\p\9\d\7\r\w\3\h\w\y\w\9\a\e\9\y\s\z\v\a\h\u\7\m\m\q\k\6\p\5\y\q\k\m\n\5\d\j\o\0\h\g\t\p\6\s\d\y\7\c\f ]] 00:31:14.268 00:31:14.268 real 0m1.514s 00:31:14.268 user 0m0.829s 00:31:14.268 sys 0m0.357s 00:31:14.268 13:52:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:14.268 13:52:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:31:14.268 ************************************ 00:31:14.268 END TEST dd_flag_nofollow_forced_aio 00:31:14.268 ************************************ 00:31:14.268 13:52:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:31:14.268 13:52:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:14.268 13:52:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:14.268 13:52:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:31:14.268 ************************************ 00:31:14.268 START TEST dd_flag_noatime_forced_aio 00:31:14.268 ************************************ 00:31:14.268 13:52:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:31:14.268 13:52:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:31:14.268 13:52:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:31:14.268 13:52:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:31:14.268 13:52:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:31:14.268 13:52:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:31:14.526 13:52:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:31:14.526 13:52:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1732110731 00:31:14.526 13:52:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:31:14.526 13:52:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1732110731 00:31:14.526 13:52:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:31:15.461 13:52:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:31:15.461 [2024-11-20 13:52:12.662561] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
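This noatime pass follows the same shape as the earlier dd_flag_noatime test: record the source file's access time with stat --printf=%X, sleep one second, copy with --iflag=noatime and expect the atime to stay put, then copy again without the flag and expect it to move forward. A compact sketch under the same path and binary assumptions as the earlier sketches; whether the final assertion holds also depends on the mount's atime policy (strictatime vs relatime), which the VM in this log evidently satisfies.

    # Sketch of the noatime check traced above (paths assumed from this log).
    set -euo pipefail
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    src=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
    dst=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
    head -c 512 /dev/urandom > "$src"                  # stand-in for gen_bytes 512
    atime_before=$(stat --printf=%X "$src")
    sleep 1
    "$SPDK_DD" --aio --if="$src" --iflag=noatime --of="$dst"
    (( $(stat --printf=%X "$src") == atime_before ))   # noatime read left atime untouched
    sleep 1
    "$SPDK_DD" --aio --if="$src" --of="$dst"
    (( $(stat --printf=%X "$src") > atime_before ))    # ordinary read advanced it (atime policy permitting)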
00:31:15.461 [2024-11-20 13:52:12.662631] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60708 ] 00:31:15.721 [2024-11-20 13:52:12.811928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:15.721 [2024-11-20 13:52:12.864065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:15.721 [2024-11-20 13:52:12.906460] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:15.721  [2024-11-20T13:52:13.303Z] Copying: 512/512 [B] (average 500 kBps) 00:31:15.980 00:31:15.980 13:52:13 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:31:15.980 13:52:13 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1732110731 )) 00:31:15.980 13:52:13 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:31:15.980 13:52:13 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1732110731 )) 00:31:15.980 13:52:13 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:31:15.980 [2024-11-20 13:52:13.188986] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:31:15.980 [2024-11-20 13:52:13.189086] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60715 ] 00:31:16.239 [2024-11-20 13:52:13.339675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:16.239 [2024-11-20 13:52:13.396436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:16.239 [2024-11-20 13:52:13.439475] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:16.239  [2024-11-20T13:52:13.822Z] Copying: 512/512 [B] (average 500 kBps) 00:31:16.499 00:31:16.499 13:52:13 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:31:16.499 13:52:13 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1732110733 )) 00:31:16.499 00:31:16.499 real 0m2.070s 00:31:16.499 user 0m0.580s 00:31:16.499 sys 0m0.251s 00:31:16.499 13:52:13 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:16.499 ************************************ 00:31:16.499 END TEST dd_flag_noatime_forced_aio 00:31:16.499 13:52:13 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:31:16.499 ************************************ 00:31:16.499 13:52:13 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:31:16.499 13:52:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:16.499 13:52:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:16.499 13:52:13 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:31:16.499 ************************************ 00:31:16.499 START TEST dd_flags_misc_forced_aio 00:31:16.499 ************************************ 00:31:16.499 13:52:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:31:16.499 13:52:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:31:16.499 13:52:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:31:16.499 13:52:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:31:16.499 13:52:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:31:16.499 13:52:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:31:16.499 13:52:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:31:16.499 13:52:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:31:16.499 13:52:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:31:16.499 13:52:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:31:16.499 [2024-11-20 13:52:13.769785] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:31:16.499 [2024-11-20 13:52:13.769852] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60747 ] 00:31:16.759 [2024-11-20 13:52:13.917985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:16.759 [2024-11-20 13:52:13.975833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:16.759 [2024-11-20 13:52:14.020531] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:16.759  [2024-11-20T13:52:14.342Z] Copying: 512/512 [B] (average 500 kBps) 00:31:17.019 00:31:17.019 13:52:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ tni5t33rujr26wx99qzcvqlfslm4fgdbtqsiw2ycgusu4rdjanbektsigg2k43wz854j5pmzjx0nfzu4cyr0mrpbwfjkshwe2txaaqfojjvu52fhvtcpcxo3c355ocmrz4u28e9mew0l103x3l7v9efkxquom64ge3owei3xzie346gv8v6shwq850m6qkl6jrhkumnr5id4d0uogwxfdgyks04iabtaj8yl1rjrvk59m5ccjmh192fd0xmre1aj0d0brww3ahwy5pttbk5bi65jxtseg5p72e4r5t2lpxlzuh1jzi3b9ti2bngbj0hetdwkymqg9fwatoadv1nfep5umdzgnainmm25of6907sjmz147au02efkbtq4oo9nkyx7dg24dnipq856n5dosdnk8jcsgvzcd4a30rejxu75atzhwtdzngp2xp5t76qv9knczossds2q8imp0c89wtc5oghgkgn7q07yyg1x888hwfnfx8nivq5gtq6cz8bk == 
\t\n\i\5\t\3\3\r\u\j\r\2\6\w\x\9\9\q\z\c\v\q\l\f\s\l\m\4\f\g\d\b\t\q\s\i\w\2\y\c\g\u\s\u\4\r\d\j\a\n\b\e\k\t\s\i\g\g\2\k\4\3\w\z\8\5\4\j\5\p\m\z\j\x\0\n\f\z\u\4\c\y\r\0\m\r\p\b\w\f\j\k\s\h\w\e\2\t\x\a\a\q\f\o\j\j\v\u\5\2\f\h\v\t\c\p\c\x\o\3\c\3\5\5\o\c\m\r\z\4\u\2\8\e\9\m\e\w\0\l\1\0\3\x\3\l\7\v\9\e\f\k\x\q\u\o\m\6\4\g\e\3\o\w\e\i\3\x\z\i\e\3\4\6\g\v\8\v\6\s\h\w\q\8\5\0\m\6\q\k\l\6\j\r\h\k\u\m\n\r\5\i\d\4\d\0\u\o\g\w\x\f\d\g\y\k\s\0\4\i\a\b\t\a\j\8\y\l\1\r\j\r\v\k\5\9\m\5\c\c\j\m\h\1\9\2\f\d\0\x\m\r\e\1\a\j\0\d\0\b\r\w\w\3\a\h\w\y\5\p\t\t\b\k\5\b\i\6\5\j\x\t\s\e\g\5\p\7\2\e\4\r\5\t\2\l\p\x\l\z\u\h\1\j\z\i\3\b\9\t\i\2\b\n\g\b\j\0\h\e\t\d\w\k\y\m\q\g\9\f\w\a\t\o\a\d\v\1\n\f\e\p\5\u\m\d\z\g\n\a\i\n\m\m\2\5\o\f\6\9\0\7\s\j\m\z\1\4\7\a\u\0\2\e\f\k\b\t\q\4\o\o\9\n\k\y\x\7\d\g\2\4\d\n\i\p\q\8\5\6\n\5\d\o\s\d\n\k\8\j\c\s\g\v\z\c\d\4\a\3\0\r\e\j\x\u\7\5\a\t\z\h\w\t\d\z\n\g\p\2\x\p\5\t\7\6\q\v\9\k\n\c\z\o\s\s\d\s\2\q\8\i\m\p\0\c\8\9\w\t\c\5\o\g\h\g\k\g\n\7\q\0\7\y\y\g\1\x\8\8\8\h\w\f\n\f\x\8\n\i\v\q\5\g\t\q\6\c\z\8\b\k ]] 00:31:17.019 13:52:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:31:17.019 13:52:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:31:17.019 [2024-11-20 13:52:14.272183] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:31:17.019 [2024-11-20 13:52:14.272353] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60749 ] 00:31:17.279 [2024-11-20 13:52:14.419511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:17.279 [2024-11-20 13:52:14.477314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:17.279 [2024-11-20 13:52:14.520110] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:17.279  [2024-11-20T13:52:14.861Z] Copying: 512/512 [B] (average 500 kBps) 00:31:17.538 00:31:17.539 13:52:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ tni5t33rujr26wx99qzcvqlfslm4fgdbtqsiw2ycgusu4rdjanbektsigg2k43wz854j5pmzjx0nfzu4cyr0mrpbwfjkshwe2txaaqfojjvu52fhvtcpcxo3c355ocmrz4u28e9mew0l103x3l7v9efkxquom64ge3owei3xzie346gv8v6shwq850m6qkl6jrhkumnr5id4d0uogwxfdgyks04iabtaj8yl1rjrvk59m5ccjmh192fd0xmre1aj0d0brww3ahwy5pttbk5bi65jxtseg5p72e4r5t2lpxlzuh1jzi3b9ti2bngbj0hetdwkymqg9fwatoadv1nfep5umdzgnainmm25of6907sjmz147au02efkbtq4oo9nkyx7dg24dnipq856n5dosdnk8jcsgvzcd4a30rejxu75atzhwtdzngp2xp5t76qv9knczossds2q8imp0c89wtc5oghgkgn7q07yyg1x888hwfnfx8nivq5gtq6cz8bk == 
\t\n\i\5\t\3\3\r\u\j\r\2\6\w\x\9\9\q\z\c\v\q\l\f\s\l\m\4\f\g\d\b\t\q\s\i\w\2\y\c\g\u\s\u\4\r\d\j\a\n\b\e\k\t\s\i\g\g\2\k\4\3\w\z\8\5\4\j\5\p\m\z\j\x\0\n\f\z\u\4\c\y\r\0\m\r\p\b\w\f\j\k\s\h\w\e\2\t\x\a\a\q\f\o\j\j\v\u\5\2\f\h\v\t\c\p\c\x\o\3\c\3\5\5\o\c\m\r\z\4\u\2\8\e\9\m\e\w\0\l\1\0\3\x\3\l\7\v\9\e\f\k\x\q\u\o\m\6\4\g\e\3\o\w\e\i\3\x\z\i\e\3\4\6\g\v\8\v\6\s\h\w\q\8\5\0\m\6\q\k\l\6\j\r\h\k\u\m\n\r\5\i\d\4\d\0\u\o\g\w\x\f\d\g\y\k\s\0\4\i\a\b\t\a\j\8\y\l\1\r\j\r\v\k\5\9\m\5\c\c\j\m\h\1\9\2\f\d\0\x\m\r\e\1\a\j\0\d\0\b\r\w\w\3\a\h\w\y\5\p\t\t\b\k\5\b\i\6\5\j\x\t\s\e\g\5\p\7\2\e\4\r\5\t\2\l\p\x\l\z\u\h\1\j\z\i\3\b\9\t\i\2\b\n\g\b\j\0\h\e\t\d\w\k\y\m\q\g\9\f\w\a\t\o\a\d\v\1\n\f\e\p\5\u\m\d\z\g\n\a\i\n\m\m\2\5\o\f\6\9\0\7\s\j\m\z\1\4\7\a\u\0\2\e\f\k\b\t\q\4\o\o\9\n\k\y\x\7\d\g\2\4\d\n\i\p\q\8\5\6\n\5\d\o\s\d\n\k\8\j\c\s\g\v\z\c\d\4\a\3\0\r\e\j\x\u\7\5\a\t\z\h\w\t\d\z\n\g\p\2\x\p\5\t\7\6\q\v\9\k\n\c\z\o\s\s\d\s\2\q\8\i\m\p\0\c\8\9\w\t\c\5\o\g\h\g\k\g\n\7\q\0\7\y\y\g\1\x\8\8\8\h\w\f\n\f\x\8\n\i\v\q\5\g\t\q\6\c\z\8\b\k ]] 00:31:17.539 13:52:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:31:17.539 13:52:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:31:17.539 [2024-11-20 13:52:14.783071] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:31:17.539 [2024-11-20 13:52:14.783143] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60761 ] 00:31:17.799 [2024-11-20 13:52:14.932738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:17.799 [2024-11-20 13:52:14.991406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:17.799 [2024-11-20 13:52:15.034656] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:17.799  [2024-11-20T13:52:15.382Z] Copying: 512/512 [B] (average 100 kBps) 00:31:18.059 00:31:18.060 13:52:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ tni5t33rujr26wx99qzcvqlfslm4fgdbtqsiw2ycgusu4rdjanbektsigg2k43wz854j5pmzjx0nfzu4cyr0mrpbwfjkshwe2txaaqfojjvu52fhvtcpcxo3c355ocmrz4u28e9mew0l103x3l7v9efkxquom64ge3owei3xzie346gv8v6shwq850m6qkl6jrhkumnr5id4d0uogwxfdgyks04iabtaj8yl1rjrvk59m5ccjmh192fd0xmre1aj0d0brww3ahwy5pttbk5bi65jxtseg5p72e4r5t2lpxlzuh1jzi3b9ti2bngbj0hetdwkymqg9fwatoadv1nfep5umdzgnainmm25of6907sjmz147au02efkbtq4oo9nkyx7dg24dnipq856n5dosdnk8jcsgvzcd4a30rejxu75atzhwtdzngp2xp5t76qv9knczossds2q8imp0c89wtc5oghgkgn7q07yyg1x888hwfnfx8nivq5gtq6cz8bk == 
\t\n\i\5\t\3\3\r\u\j\r\2\6\w\x\9\9\q\z\c\v\q\l\f\s\l\m\4\f\g\d\b\t\q\s\i\w\2\y\c\g\u\s\u\4\r\d\j\a\n\b\e\k\t\s\i\g\g\2\k\4\3\w\z\8\5\4\j\5\p\m\z\j\x\0\n\f\z\u\4\c\y\r\0\m\r\p\b\w\f\j\k\s\h\w\e\2\t\x\a\a\q\f\o\j\j\v\u\5\2\f\h\v\t\c\p\c\x\o\3\c\3\5\5\o\c\m\r\z\4\u\2\8\e\9\m\e\w\0\l\1\0\3\x\3\l\7\v\9\e\f\k\x\q\u\o\m\6\4\g\e\3\o\w\e\i\3\x\z\i\e\3\4\6\g\v\8\v\6\s\h\w\q\8\5\0\m\6\q\k\l\6\j\r\h\k\u\m\n\r\5\i\d\4\d\0\u\o\g\w\x\f\d\g\y\k\s\0\4\i\a\b\t\a\j\8\y\l\1\r\j\r\v\k\5\9\m\5\c\c\j\m\h\1\9\2\f\d\0\x\m\r\e\1\a\j\0\d\0\b\r\w\w\3\a\h\w\y\5\p\t\t\b\k\5\b\i\6\5\j\x\t\s\e\g\5\p\7\2\e\4\r\5\t\2\l\p\x\l\z\u\h\1\j\z\i\3\b\9\t\i\2\b\n\g\b\j\0\h\e\t\d\w\k\y\m\q\g\9\f\w\a\t\o\a\d\v\1\n\f\e\p\5\u\m\d\z\g\n\a\i\n\m\m\2\5\o\f\6\9\0\7\s\j\m\z\1\4\7\a\u\0\2\e\f\k\b\t\q\4\o\o\9\n\k\y\x\7\d\g\2\4\d\n\i\p\q\8\5\6\n\5\d\o\s\d\n\k\8\j\c\s\g\v\z\c\d\4\a\3\0\r\e\j\x\u\7\5\a\t\z\h\w\t\d\z\n\g\p\2\x\p\5\t\7\6\q\v\9\k\n\c\z\o\s\s\d\s\2\q\8\i\m\p\0\c\8\9\w\t\c\5\o\g\h\g\k\g\n\7\q\0\7\y\y\g\1\x\8\8\8\h\w\f\n\f\x\8\n\i\v\q\5\g\t\q\6\c\z\8\b\k ]] 00:31:18.060 13:52:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:31:18.060 13:52:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:31:18.060 [2024-11-20 13:52:15.311789] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:31:18.060 [2024-11-20 13:52:15.311878] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60764 ] 00:31:18.320 [2024-11-20 13:52:15.461333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:18.320 [2024-11-20 13:52:15.519505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:18.320 [2024-11-20 13:52:15.562566] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:18.320  [2024-11-20T13:52:15.904Z] Copying: 512/512 [B] (average 500 kBps) 00:31:18.581 00:31:18.581 13:52:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ tni5t33rujr26wx99qzcvqlfslm4fgdbtqsiw2ycgusu4rdjanbektsigg2k43wz854j5pmzjx0nfzu4cyr0mrpbwfjkshwe2txaaqfojjvu52fhvtcpcxo3c355ocmrz4u28e9mew0l103x3l7v9efkxquom64ge3owei3xzie346gv8v6shwq850m6qkl6jrhkumnr5id4d0uogwxfdgyks04iabtaj8yl1rjrvk59m5ccjmh192fd0xmre1aj0d0brww3ahwy5pttbk5bi65jxtseg5p72e4r5t2lpxlzuh1jzi3b9ti2bngbj0hetdwkymqg9fwatoadv1nfep5umdzgnainmm25of6907sjmz147au02efkbtq4oo9nkyx7dg24dnipq856n5dosdnk8jcsgvzcd4a30rejxu75atzhwtdzngp2xp5t76qv9knczossds2q8imp0c89wtc5oghgkgn7q07yyg1x888hwfnfx8nivq5gtq6cz8bk == 
\t\n\i\5\t\3\3\r\u\j\r\2\6\w\x\9\9\q\z\c\v\q\l\f\s\l\m\4\f\g\d\b\t\q\s\i\w\2\y\c\g\u\s\u\4\r\d\j\a\n\b\e\k\t\s\i\g\g\2\k\4\3\w\z\8\5\4\j\5\p\m\z\j\x\0\n\f\z\u\4\c\y\r\0\m\r\p\b\w\f\j\k\s\h\w\e\2\t\x\a\a\q\f\o\j\j\v\u\5\2\f\h\v\t\c\p\c\x\o\3\c\3\5\5\o\c\m\r\z\4\u\2\8\e\9\m\e\w\0\l\1\0\3\x\3\l\7\v\9\e\f\k\x\q\u\o\m\6\4\g\e\3\o\w\e\i\3\x\z\i\e\3\4\6\g\v\8\v\6\s\h\w\q\8\5\0\m\6\q\k\l\6\j\r\h\k\u\m\n\r\5\i\d\4\d\0\u\o\g\w\x\f\d\g\y\k\s\0\4\i\a\b\t\a\j\8\y\l\1\r\j\r\v\k\5\9\m\5\c\c\j\m\h\1\9\2\f\d\0\x\m\r\e\1\a\j\0\d\0\b\r\w\w\3\a\h\w\y\5\p\t\t\b\k\5\b\i\6\5\j\x\t\s\e\g\5\p\7\2\e\4\r\5\t\2\l\p\x\l\z\u\h\1\j\z\i\3\b\9\t\i\2\b\n\g\b\j\0\h\e\t\d\w\k\y\m\q\g\9\f\w\a\t\o\a\d\v\1\n\f\e\p\5\u\m\d\z\g\n\a\i\n\m\m\2\5\o\f\6\9\0\7\s\j\m\z\1\4\7\a\u\0\2\e\f\k\b\t\q\4\o\o\9\n\k\y\x\7\d\g\2\4\d\n\i\p\q\8\5\6\n\5\d\o\s\d\n\k\8\j\c\s\g\v\z\c\d\4\a\3\0\r\e\j\x\u\7\5\a\t\z\h\w\t\d\z\n\g\p\2\x\p\5\t\7\6\q\v\9\k\n\c\z\o\s\s\d\s\2\q\8\i\m\p\0\c\8\9\w\t\c\5\o\g\h\g\k\g\n\7\q\0\7\y\y\g\1\x\8\8\8\h\w\f\n\f\x\8\n\i\v\q\5\g\t\q\6\c\z\8\b\k ]] 00:31:18.581 13:52:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:31:18.581 13:52:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:31:18.581 13:52:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:31:18.581 13:52:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:31:18.581 13:52:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:31:18.581 13:52:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:31:18.581 [2024-11-20 13:52:15.839849] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:31:18.581 [2024-11-20 13:52:15.840006] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60776 ] 00:31:18.841 [2024-11-20 13:52:15.991791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:18.841 [2024-11-20 13:52:16.054353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:18.841 [2024-11-20 13:52:16.097928] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:18.841  [2024-11-20T13:52:16.424Z] Copying: 512/512 [B] (average 500 kBps) 00:31:19.101 00:31:19.101 13:52:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 5wuy27zq1zs00yd3ksahrhr2vjgshhjqcwp7inqi0mgkqsyqx3atky6xee4qn97v4e3wzor3iuewfrmcjxosayyniaa13suik3c2fmpwixs6encfrd1v91w1xmhavw8w9n23x20bzo1dg7nnjpedb5pu8t421u5asn0ityfnu9fcsps60r24jha3bqgejka9zgwovw3krn1e7zb40ozfz095n2giuugzi89lwbu66hyskkcy6757w8dknlsuwqki5mulbp4p155cmgyhihqazv1712zbdu66nl2u2vt7srpwojka5irkaphx2gaqs6jssb8zsgdbyu2143nahltjict386brtrmawo52ai5j4jjl2tsa46px3vzynvjoam0s9p5c43yk27jt6pmk12q6901icbimef4pw2av2jphu9k2vpp1mzv8tyzejy0iydyzbiqjgj9zutnxxslc5uf8k7cbas4r9gyoyct83h4kq0lj796w2no8hdv7abiabm1y == \5\w\u\y\2\7\z\q\1\z\s\0\0\y\d\3\k\s\a\h\r\h\r\2\v\j\g\s\h\h\j\q\c\w\p\7\i\n\q\i\0\m\g\k\q\s\y\q\x\3\a\t\k\y\6\x\e\e\4\q\n\9\7\v\4\e\3\w\z\o\r\3\i\u\e\w\f\r\m\c\j\x\o\s\a\y\y\n\i\a\a\1\3\s\u\i\k\3\c\2\f\m\p\w\i\x\s\6\e\n\c\f\r\d\1\v\9\1\w\1\x\m\h\a\v\w\8\w\9\n\2\3\x\2\0\b\z\o\1\d\g\7\n\n\j\p\e\d\b\5\p\u\8\t\4\2\1\u\5\a\s\n\0\i\t\y\f\n\u\9\f\c\s\p\s\6\0\r\2\4\j\h\a\3\b\q\g\e\j\k\a\9\z\g\w\o\v\w\3\k\r\n\1\e\7\z\b\4\0\o\z\f\z\0\9\5\n\2\g\i\u\u\g\z\i\8\9\l\w\b\u\6\6\h\y\s\k\k\c\y\6\7\5\7\w\8\d\k\n\l\s\u\w\q\k\i\5\m\u\l\b\p\4\p\1\5\5\c\m\g\y\h\i\h\q\a\z\v\1\7\1\2\z\b\d\u\6\6\n\l\2\u\2\v\t\7\s\r\p\w\o\j\k\a\5\i\r\k\a\p\h\x\2\g\a\q\s\6\j\s\s\b\8\z\s\g\d\b\y\u\2\1\4\3\n\a\h\l\t\j\i\c\t\3\8\6\b\r\t\r\m\a\w\o\5\2\a\i\5\j\4\j\j\l\2\t\s\a\4\6\p\x\3\v\z\y\n\v\j\o\a\m\0\s\9\p\5\c\4\3\y\k\2\7\j\t\6\p\m\k\1\2\q\6\9\0\1\i\c\b\i\m\e\f\4\p\w\2\a\v\2\j\p\h\u\9\k\2\v\p\p\1\m\z\v\8\t\y\z\e\j\y\0\i\y\d\y\z\b\i\q\j\g\j\9\z\u\t\n\x\x\s\l\c\5\u\f\8\k\7\c\b\a\s\4\r\9\g\y\o\y\c\t\8\3\h\4\k\q\0\l\j\7\9\6\w\2\n\o\8\h\d\v\7\a\b\i\a\b\m\1\y ]] 00:31:19.101 13:52:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:31:19.102 13:52:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:31:19.102 [2024-11-20 13:52:16.383502] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:31:19.102 [2024-11-20 13:52:16.383661] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60779 ] 00:31:19.360 [2024-11-20 13:52:16.536526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:19.360 [2024-11-20 13:52:16.594790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:19.360 [2024-11-20 13:52:16.637991] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:19.360  [2024-11-20T13:52:16.942Z] Copying: 512/512 [B] (average 500 kBps) 00:31:19.619 00:31:19.619 13:52:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 5wuy27zq1zs00yd3ksahrhr2vjgshhjqcwp7inqi0mgkqsyqx3atky6xee4qn97v4e3wzor3iuewfrmcjxosayyniaa13suik3c2fmpwixs6encfrd1v91w1xmhavw8w9n23x20bzo1dg7nnjpedb5pu8t421u5asn0ityfnu9fcsps60r24jha3bqgejka9zgwovw3krn1e7zb40ozfz095n2giuugzi89lwbu66hyskkcy6757w8dknlsuwqki5mulbp4p155cmgyhihqazv1712zbdu66nl2u2vt7srpwojka5irkaphx2gaqs6jssb8zsgdbyu2143nahltjict386brtrmawo52ai5j4jjl2tsa46px3vzynvjoam0s9p5c43yk27jt6pmk12q6901icbimef4pw2av2jphu9k2vpp1mzv8tyzejy0iydyzbiqjgj9zutnxxslc5uf8k7cbas4r9gyoyct83h4kq0lj796w2no8hdv7abiabm1y == \5\w\u\y\2\7\z\q\1\z\s\0\0\y\d\3\k\s\a\h\r\h\r\2\v\j\g\s\h\h\j\q\c\w\p\7\i\n\q\i\0\m\g\k\q\s\y\q\x\3\a\t\k\y\6\x\e\e\4\q\n\9\7\v\4\e\3\w\z\o\r\3\i\u\e\w\f\r\m\c\j\x\o\s\a\y\y\n\i\a\a\1\3\s\u\i\k\3\c\2\f\m\p\w\i\x\s\6\e\n\c\f\r\d\1\v\9\1\w\1\x\m\h\a\v\w\8\w\9\n\2\3\x\2\0\b\z\o\1\d\g\7\n\n\j\p\e\d\b\5\p\u\8\t\4\2\1\u\5\a\s\n\0\i\t\y\f\n\u\9\f\c\s\p\s\6\0\r\2\4\j\h\a\3\b\q\g\e\j\k\a\9\z\g\w\o\v\w\3\k\r\n\1\e\7\z\b\4\0\o\z\f\z\0\9\5\n\2\g\i\u\u\g\z\i\8\9\l\w\b\u\6\6\h\y\s\k\k\c\y\6\7\5\7\w\8\d\k\n\l\s\u\w\q\k\i\5\m\u\l\b\p\4\p\1\5\5\c\m\g\y\h\i\h\q\a\z\v\1\7\1\2\z\b\d\u\6\6\n\l\2\u\2\v\t\7\s\r\p\w\o\j\k\a\5\i\r\k\a\p\h\x\2\g\a\q\s\6\j\s\s\b\8\z\s\g\d\b\y\u\2\1\4\3\n\a\h\l\t\j\i\c\t\3\8\6\b\r\t\r\m\a\w\o\5\2\a\i\5\j\4\j\j\l\2\t\s\a\4\6\p\x\3\v\z\y\n\v\j\o\a\m\0\s\9\p\5\c\4\3\y\k\2\7\j\t\6\p\m\k\1\2\q\6\9\0\1\i\c\b\i\m\e\f\4\p\w\2\a\v\2\j\p\h\u\9\k\2\v\p\p\1\m\z\v\8\t\y\z\e\j\y\0\i\y\d\y\z\b\i\q\j\g\j\9\z\u\t\n\x\x\s\l\c\5\u\f\8\k\7\c\b\a\s\4\r\9\g\y\o\y\c\t\8\3\h\4\k\q\0\l\j\7\9\6\w\2\n\o\8\h\d\v\7\a\b\i\a\b\m\1\y ]] 00:31:19.619 13:52:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:31:19.619 13:52:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:31:19.619 [2024-11-20 13:52:16.901629] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:31:19.619 [2024-11-20 13:52:16.901702] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60791 ] 00:31:19.878 [2024-11-20 13:52:17.036303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:19.878 [2024-11-20 13:52:17.093254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:19.878 [2024-11-20 13:52:17.135852] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:19.878  [2024-11-20T13:52:17.460Z] Copying: 512/512 [B] (average 125 kBps) 00:31:20.137 00:31:20.137 13:52:17 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 5wuy27zq1zs00yd3ksahrhr2vjgshhjqcwp7inqi0mgkqsyqx3atky6xee4qn97v4e3wzor3iuewfrmcjxosayyniaa13suik3c2fmpwixs6encfrd1v91w1xmhavw8w9n23x20bzo1dg7nnjpedb5pu8t421u5asn0ityfnu9fcsps60r24jha3bqgejka9zgwovw3krn1e7zb40ozfz095n2giuugzi89lwbu66hyskkcy6757w8dknlsuwqki5mulbp4p155cmgyhihqazv1712zbdu66nl2u2vt7srpwojka5irkaphx2gaqs6jssb8zsgdbyu2143nahltjict386brtrmawo52ai5j4jjl2tsa46px3vzynvjoam0s9p5c43yk27jt6pmk12q6901icbimef4pw2av2jphu9k2vpp1mzv8tyzejy0iydyzbiqjgj9zutnxxslc5uf8k7cbas4r9gyoyct83h4kq0lj796w2no8hdv7abiabm1y == \5\w\u\y\2\7\z\q\1\z\s\0\0\y\d\3\k\s\a\h\r\h\r\2\v\j\g\s\h\h\j\q\c\w\p\7\i\n\q\i\0\m\g\k\q\s\y\q\x\3\a\t\k\y\6\x\e\e\4\q\n\9\7\v\4\e\3\w\z\o\r\3\i\u\e\w\f\r\m\c\j\x\o\s\a\y\y\n\i\a\a\1\3\s\u\i\k\3\c\2\f\m\p\w\i\x\s\6\e\n\c\f\r\d\1\v\9\1\w\1\x\m\h\a\v\w\8\w\9\n\2\3\x\2\0\b\z\o\1\d\g\7\n\n\j\p\e\d\b\5\p\u\8\t\4\2\1\u\5\a\s\n\0\i\t\y\f\n\u\9\f\c\s\p\s\6\0\r\2\4\j\h\a\3\b\q\g\e\j\k\a\9\z\g\w\o\v\w\3\k\r\n\1\e\7\z\b\4\0\o\z\f\z\0\9\5\n\2\g\i\u\u\g\z\i\8\9\l\w\b\u\6\6\h\y\s\k\k\c\y\6\7\5\7\w\8\d\k\n\l\s\u\w\q\k\i\5\m\u\l\b\p\4\p\1\5\5\c\m\g\y\h\i\h\q\a\z\v\1\7\1\2\z\b\d\u\6\6\n\l\2\u\2\v\t\7\s\r\p\w\o\j\k\a\5\i\r\k\a\p\h\x\2\g\a\q\s\6\j\s\s\b\8\z\s\g\d\b\y\u\2\1\4\3\n\a\h\l\t\j\i\c\t\3\8\6\b\r\t\r\m\a\w\o\5\2\a\i\5\j\4\j\j\l\2\t\s\a\4\6\p\x\3\v\z\y\n\v\j\o\a\m\0\s\9\p\5\c\4\3\y\k\2\7\j\t\6\p\m\k\1\2\q\6\9\0\1\i\c\b\i\m\e\f\4\p\w\2\a\v\2\j\p\h\u\9\k\2\v\p\p\1\m\z\v\8\t\y\z\e\j\y\0\i\y\d\y\z\b\i\q\j\g\j\9\z\u\t\n\x\x\s\l\c\5\u\f\8\k\7\c\b\a\s\4\r\9\g\y\o\y\c\t\8\3\h\4\k\q\0\l\j\7\9\6\w\2\n\o\8\h\d\v\7\a\b\i\a\b\m\1\y ]] 00:31:20.137 13:52:17 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:31:20.137 13:52:17 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:31:20.137 [2024-11-20 13:52:17.415679] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:31:20.137 [2024-11-20 13:52:17.415763] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60794 ] 00:31:20.396 [2024-11-20 13:52:17.566307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:20.396 [2024-11-20 13:52:17.627892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:20.396 [2024-11-20 13:52:17.674449] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:20.396  [2024-11-20T13:52:17.978Z] Copying: 512/512 [B] (average 500 kBps) 00:31:20.655 00:31:20.655 13:52:17 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 5wuy27zq1zs00yd3ksahrhr2vjgshhjqcwp7inqi0mgkqsyqx3atky6xee4qn97v4e3wzor3iuewfrmcjxosayyniaa13suik3c2fmpwixs6encfrd1v91w1xmhavw8w9n23x20bzo1dg7nnjpedb5pu8t421u5asn0ityfnu9fcsps60r24jha3bqgejka9zgwovw3krn1e7zb40ozfz095n2giuugzi89lwbu66hyskkcy6757w8dknlsuwqki5mulbp4p155cmgyhihqazv1712zbdu66nl2u2vt7srpwojka5irkaphx2gaqs6jssb8zsgdbyu2143nahltjict386brtrmawo52ai5j4jjl2tsa46px3vzynvjoam0s9p5c43yk27jt6pmk12q6901icbimef4pw2av2jphu9k2vpp1mzv8tyzejy0iydyzbiqjgj9zutnxxslc5uf8k7cbas4r9gyoyct83h4kq0lj796w2no8hdv7abiabm1y == \5\w\u\y\2\7\z\q\1\z\s\0\0\y\d\3\k\s\a\h\r\h\r\2\v\j\g\s\h\h\j\q\c\w\p\7\i\n\q\i\0\m\g\k\q\s\y\q\x\3\a\t\k\y\6\x\e\e\4\q\n\9\7\v\4\e\3\w\z\o\r\3\i\u\e\w\f\r\m\c\j\x\o\s\a\y\y\n\i\a\a\1\3\s\u\i\k\3\c\2\f\m\p\w\i\x\s\6\e\n\c\f\r\d\1\v\9\1\w\1\x\m\h\a\v\w\8\w\9\n\2\3\x\2\0\b\z\o\1\d\g\7\n\n\j\p\e\d\b\5\p\u\8\t\4\2\1\u\5\a\s\n\0\i\t\y\f\n\u\9\f\c\s\p\s\6\0\r\2\4\j\h\a\3\b\q\g\e\j\k\a\9\z\g\w\o\v\w\3\k\r\n\1\e\7\z\b\4\0\o\z\f\z\0\9\5\n\2\g\i\u\u\g\z\i\8\9\l\w\b\u\6\6\h\y\s\k\k\c\y\6\7\5\7\w\8\d\k\n\l\s\u\w\q\k\i\5\m\u\l\b\p\4\p\1\5\5\c\m\g\y\h\i\h\q\a\z\v\1\7\1\2\z\b\d\u\6\6\n\l\2\u\2\v\t\7\s\r\p\w\o\j\k\a\5\i\r\k\a\p\h\x\2\g\a\q\s\6\j\s\s\b\8\z\s\g\d\b\y\u\2\1\4\3\n\a\h\l\t\j\i\c\t\3\8\6\b\r\t\r\m\a\w\o\5\2\a\i\5\j\4\j\j\l\2\t\s\a\4\6\p\x\3\v\z\y\n\v\j\o\a\m\0\s\9\p\5\c\4\3\y\k\2\7\j\t\6\p\m\k\1\2\q\6\9\0\1\i\c\b\i\m\e\f\4\p\w\2\a\v\2\j\p\h\u\9\k\2\v\p\p\1\m\z\v\8\t\y\z\e\j\y\0\i\y\d\y\z\b\i\q\j\g\j\9\z\u\t\n\x\x\s\l\c\5\u\f\8\k\7\c\b\a\s\4\r\9\g\y\o\y\c\t\8\3\h\4\k\q\0\l\j\7\9\6\w\2\n\o\8\h\d\v\7\a\b\i\a\b\m\1\y ]] 00:31:20.655 00:31:20.655 real 0m4.195s 00:31:20.655 user 0m2.249s 00:31:20.655 sys 0m0.975s 00:31:20.655 13:52:17 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:20.655 13:52:17 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:31:20.655 ************************************ 00:31:20.655 END TEST dd_flags_misc_forced_aio 00:31:20.655 ************************************ 00:31:20.655 13:52:17 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:31:20.655 13:52:17 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:31:20.655 13:52:17 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:31:20.655 00:31:20.655 real 0m19.168s 00:31:20.655 user 0m9.203s 00:31:20.655 sys 0m5.764s 00:31:20.655 13:52:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:20.655 13:52:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 
00:31:20.655 ************************************ 00:31:20.655 END TEST spdk_dd_posix 00:31:20.655 ************************************ 00:31:20.914 13:52:18 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:31:20.914 13:52:18 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:20.914 13:52:18 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:20.914 13:52:18 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:31:20.914 ************************************ 00:31:20.914 START TEST spdk_dd_malloc 00:31:20.914 ************************************ 00:31:20.914 13:52:18 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:31:20.914 * Looking for test storage... 00:31:20.914 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:31:20.914 13:52:18 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:20.914 13:52:18 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lcov --version 00:31:20.914 13:52:18 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:21.173 13:52:18 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:21.173 13:52:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:21.173 13:52:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:21.173 13:52:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:21.173 13:52:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:31:21.173 13:52:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:31:21.173 13:52:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:31:21.173 13:52:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:31:21.173 13:52:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:31:21.173 13:52:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:31:21.173 13:52:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:31:21.173 13:52:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:21.173 13:52:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:31:21.173 13:52:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:31:21.173 13:52:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:21.173 13:52:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:21.173 13:52:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:31:21.173 13:52:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:31:21.173 13:52:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:21.173 13:52:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:31:21.173 13:52:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:31:21.173 13:52:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:31:21.173 13:52:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:31:21.173 13:52:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:21.173 13:52:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:31:21.173 13:52:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:31:21.173 13:52:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:21.173 13:52:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:21.173 13:52:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:31:21.173 13:52:18 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:21.173 13:52:18 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:21.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:21.173 --rc genhtml_branch_coverage=1 00:31:21.173 --rc genhtml_function_coverage=1 00:31:21.173 --rc genhtml_legend=1 00:31:21.173 --rc geninfo_all_blocks=1 00:31:21.173 --rc geninfo_unexecuted_blocks=1 00:31:21.173 00:31:21.173 ' 00:31:21.173 13:52:18 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:21.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:21.173 --rc genhtml_branch_coverage=1 00:31:21.173 --rc genhtml_function_coverage=1 00:31:21.173 --rc genhtml_legend=1 00:31:21.173 --rc geninfo_all_blocks=1 00:31:21.173 --rc geninfo_unexecuted_blocks=1 00:31:21.173 00:31:21.173 ' 00:31:21.173 13:52:18 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:21.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:21.173 --rc genhtml_branch_coverage=1 00:31:21.173 --rc genhtml_function_coverage=1 00:31:21.173 --rc genhtml_legend=1 00:31:21.173 --rc geninfo_all_blocks=1 00:31:21.173 --rc geninfo_unexecuted_blocks=1 00:31:21.173 00:31:21.173 ' 00:31:21.173 13:52:18 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:21.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:21.173 --rc genhtml_branch_coverage=1 00:31:21.173 --rc genhtml_function_coverage=1 00:31:21.173 --rc genhtml_legend=1 00:31:21.173 --rc geninfo_all_blocks=1 00:31:21.173 --rc geninfo_unexecuted_blocks=1 00:31:21.173 00:31:21.173 ' 00:31:21.173 13:52:18 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:21.173 13:52:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:31:21.173 13:52:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:21.173 13:52:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:21.173 13:52:18 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:21.174 13:52:18 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.174 13:52:18 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.174 13:52:18 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.174 13:52:18 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:31:21.174 13:52:18 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.174 13:52:18 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:31:21.174 13:52:18 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:21.174 13:52:18 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:21.174 13:52:18 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:31:21.174 ************************************ 00:31:21.174 START TEST dd_malloc_copy 00:31:21.174 ************************************ 00:31:21.174 13:52:18 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:31:21.174 13:52:18 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:31:21.174 13:52:18 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:31:21.174 13:52:18 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:31:21.174 13:52:18 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:31:21.174 13:52:18 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:31:21.174 13:52:18 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:31:21.174 13:52:18 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:31:21.174 13:52:18 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:31:21.174 13:52:18 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:31:21.174 13:52:18 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:31:21.174 [2024-11-20 13:52:18.340450] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:31:21.174 [2024-11-20 13:52:18.340521] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60876 ] 00:31:21.174 { 00:31:21.174 "subsystems": [ 00:31:21.174 { 00:31:21.174 "subsystem": "bdev", 00:31:21.174 "config": [ 00:31:21.174 { 00:31:21.174 "params": { 00:31:21.174 "block_size": 512, 00:31:21.174 "num_blocks": 1048576, 00:31:21.174 "name": "malloc0" 00:31:21.174 }, 00:31:21.174 "method": "bdev_malloc_create" 00:31:21.174 }, 00:31:21.174 { 00:31:21.174 "params": { 00:31:21.174 "block_size": 512, 00:31:21.174 "num_blocks": 1048576, 00:31:21.174 "name": "malloc1" 00:31:21.174 }, 00:31:21.174 "method": "bdev_malloc_create" 00:31:21.174 }, 00:31:21.174 { 00:31:21.174 "method": "bdev_wait_for_examine" 00:31:21.174 } 00:31:21.174 ] 00:31:21.174 } 00:31:21.174 ] 00:31:21.174 } 00:31:21.174 [2024-11-20 13:52:18.489580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:21.433 [2024-11-20 13:52:18.548131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:21.433 [2024-11-20 13:52:18.591852] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:22.817  [2024-11-20T13:52:21.078Z] Copying: 203/512 [MB] (203 MBps) [2024-11-20T13:52:21.646Z] Copying: 406/512 [MB] (202 MBps) [2024-11-20T13:52:22.585Z] Copying: 512/512 [MB] (average 202 MBps) 00:31:25.262 00:31:25.262 13:52:22 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:31:25.262 13:52:22 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:31:25.262 13:52:22 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:31:25.262 13:52:22 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:31:25.262 [2024-11-20 13:52:22.376002] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:31:25.262 [2024-11-20 13:52:22.376081] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60929 ] 00:31:25.262 { 00:31:25.262 "subsystems": [ 00:31:25.262 { 00:31:25.262 "subsystem": "bdev", 00:31:25.262 "config": [ 00:31:25.262 { 00:31:25.262 "params": { 00:31:25.262 "block_size": 512, 00:31:25.262 "num_blocks": 1048576, 00:31:25.262 "name": "malloc0" 00:31:25.262 }, 00:31:25.262 "method": "bdev_malloc_create" 00:31:25.262 }, 00:31:25.262 { 00:31:25.262 "params": { 00:31:25.262 "block_size": 512, 00:31:25.262 "num_blocks": 1048576, 00:31:25.262 "name": "malloc1" 00:31:25.262 }, 00:31:25.262 "method": "bdev_malloc_create" 00:31:25.262 }, 00:31:25.262 { 00:31:25.262 "method": "bdev_wait_for_examine" 00:31:25.262 } 00:31:25.262 ] 00:31:25.262 } 00:31:25.262 ] 00:31:25.262 } 00:31:25.262 [2024-11-20 13:52:22.526964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:25.522 [2024-11-20 13:52:22.585645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:25.522 [2024-11-20 13:52:22.629514] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:26.901  [2024-11-20T13:52:25.164Z] Copying: 199/512 [MB] (199 MBps) [2024-11-20T13:52:25.736Z] Copying: 401/512 [MB] (201 MBps) [2024-11-20T13:52:26.676Z] Copying: 512/512 [MB] (average 201 MBps) 00:31:29.353 00:31:29.353 00:31:29.353 real 0m8.081s 00:31:29.353 user 0m6.864s 00:31:29.353 sys 0m1.066s 00:31:29.353 ************************************ 00:31:29.353 END TEST dd_malloc_copy 00:31:29.353 ************************************ 00:31:29.353 13:52:26 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:29.353 13:52:26 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:31:29.353 ************************************ 00:31:29.353 END TEST spdk_dd_malloc 00:31:29.353 ************************************ 00:31:29.353 00:31:29.353 real 0m8.395s 00:31:29.353 user 0m7.020s 00:31:29.353 sys 0m1.244s 00:31:29.353 13:52:26 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:29.353 13:52:26 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:31:29.353 13:52:26 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:31:29.353 13:52:26 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:29.353 13:52:26 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:29.353 13:52:26 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:31:29.353 ************************************ 00:31:29.353 START TEST spdk_dd_bdev_to_bdev 00:31:29.353 ************************************ 00:31:29.353 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:31:29.353 * Looking for test storage... 
00:31:29.353 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:31:29.353 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:29.353 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lcov --version 00:31:29.353 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:29.614 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:29.614 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:29.614 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:29.614 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:29.614 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:31:29.614 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:31:29.614 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:31:29.614 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:31:29.614 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:31:29.614 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:31:29.614 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:31:29.614 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:29.614 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:31:29.614 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:31:29.614 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:29.614 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:29.614 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:31:29.614 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:31:29.614 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:29.614 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:31:29.614 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:31:29.614 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:31:29.614 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:31:29.614 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:29.614 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:31:29.614 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:31:29.614 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:29.614 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:29.614 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:31:29.614 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:29.614 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:29.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:29.614 --rc genhtml_branch_coverage=1 00:31:29.614 --rc genhtml_function_coverage=1 00:31:29.614 --rc genhtml_legend=1 00:31:29.614 --rc geninfo_all_blocks=1 00:31:29.614 --rc geninfo_unexecuted_blocks=1 00:31:29.614 00:31:29.614 ' 00:31:29.614 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:29.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:29.614 --rc genhtml_branch_coverage=1 00:31:29.614 --rc genhtml_function_coverage=1 00:31:29.614 --rc genhtml_legend=1 00:31:29.614 --rc geninfo_all_blocks=1 00:31:29.614 --rc geninfo_unexecuted_blocks=1 00:31:29.614 00:31:29.614 ' 00:31:29.614 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:29.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:29.614 --rc genhtml_branch_coverage=1 00:31:29.614 --rc genhtml_function_coverage=1 00:31:29.614 --rc genhtml_legend=1 00:31:29.614 --rc geninfo_all_blocks=1 00:31:29.614 --rc geninfo_unexecuted_blocks=1 00:31:29.614 00:31:29.614 ' 00:31:29.614 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:29.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:29.614 --rc genhtml_branch_coverage=1 00:31:29.614 --rc genhtml_function_coverage=1 00:31:29.614 --rc genhtml_legend=1 00:31:29.615 --rc geninfo_all_blocks=1 00:31:29.615 --rc geninfo_unexecuted_blocks=1 00:31:29.615 00:31:29.615 ' 00:31:29.615 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:29.615 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:31:29.615 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:29.615 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:29.615 13:52:26 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:29.615 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.615 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.615 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.615 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:31:29.615 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.615 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:31:29.615 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:31:29.615 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:31:29.615 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:31:29.615 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:31:29.615 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:31:29.615 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:31:29.615 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:31:29.615 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:31:29.615 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:31:29.615 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:31:29.615 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:31:29.615 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:31:29.615 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:31:29.615 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:31:29.615 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:31:29.615 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:31:29.615 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:31:29.615 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:31:29.615 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:31:29.615 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:29.615 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:31:29.615 ************************************ 00:31:29.615 START TEST dd_inflate_file 00:31:29.615 ************************************ 00:31:29.615 13:52:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:31:29.615 [2024-11-20 13:52:26.800907] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:31:29.615 [2024-11-20 13:52:26.800999] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61047 ] 00:31:29.884 [2024-11-20 13:52:26.954490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:29.884 [2024-11-20 13:52:27.037179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:29.884 [2024-11-20 13:52:27.115559] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:30.145  [2024-11-20T13:52:27.727Z] Copying: 64/64 [MB] (average 1254 MBps) 00:31:30.404 00:31:30.404 ************************************ 00:31:30.404 END TEST dd_inflate_file 00:31:30.404 ************************************ 00:31:30.404 00:31:30.404 real 0m0.758s 00:31:30.404 user 0m0.458s 00:31:30.404 sys 0m0.418s 00:31:30.404 13:52:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:30.404 13:52:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:31:30.404 13:52:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:31:30.404 13:52:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:31:30.404 13:52:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:31:30.404 13:52:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:31:30.404 13:52:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:30.404 13:52:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:31:30.404 13:52:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:31:30.404 13:52:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:31:30.404 13:52:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:31:30.404 ************************************ 00:31:30.404 START TEST dd_copy_to_out_bdev 00:31:30.404 ************************************ 00:31:30.404 13:52:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:31:30.404 [2024-11-20 13:52:27.621461] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:31:30.404 [2024-11-20 13:52:27.621554] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61084 ] 00:31:30.404 { 00:31:30.404 "subsystems": [ 00:31:30.404 { 00:31:30.404 "subsystem": "bdev", 00:31:30.404 "config": [ 00:31:30.404 { 00:31:30.404 "params": { 00:31:30.404 "trtype": "pcie", 00:31:30.404 "traddr": "0000:00:10.0", 00:31:30.404 "name": "Nvme0" 00:31:30.404 }, 00:31:30.404 "method": "bdev_nvme_attach_controller" 00:31:30.404 }, 00:31:30.404 { 00:31:30.404 "params": { 00:31:30.404 "trtype": "pcie", 00:31:30.404 "traddr": "0000:00:11.0", 00:31:30.404 "name": "Nvme1" 00:31:30.404 }, 00:31:30.404 "method": "bdev_nvme_attach_controller" 00:31:30.404 }, 00:31:30.404 { 00:31:30.404 "method": "bdev_wait_for_examine" 00:31:30.404 } 00:31:30.404 ] 00:31:30.404 } 00:31:30.404 ] 00:31:30.404 } 00:31:30.663 [2024-11-20 13:52:27.774035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:30.663 [2024-11-20 13:52:27.859896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:30.663 [2024-11-20 13:52:27.940114] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:32.048  [2024-11-20T13:52:29.371Z] Copying: 64/64 [MB] (average 72 MBps) 00:31:32.048 00:31:32.048 00:31:32.048 real 0m1.691s 00:31:32.048 user 0m1.395s 00:31:32.048 sys 0m1.316s 00:31:32.048 13:52:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:32.048 13:52:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:31:32.048 ************************************ 00:31:32.048 END TEST dd_copy_to_out_bdev 00:31:32.048 ************************************ 00:31:32.048 13:52:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:31:32.048 13:52:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:31:32.048 13:52:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:32.048 13:52:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:32.048 13:52:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:31:32.048 ************************************ 00:31:32.048 START TEST dd_offset_magic 00:31:32.048 ************************************ 00:31:32.048 13:52:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:31:32.048 13:52:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:31:32.048 13:52:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:31:32.048 13:52:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:31:32.048 13:52:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:31:32.048 13:52:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:31:32.048 13:52:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:31:32.048 13:52:29 
spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:31:32.048 13:52:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:31:32.308 [2024-11-20 13:52:29.391392] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:31:32.308 [2024-11-20 13:52:29.391537] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61126 ] 00:31:32.308 { 00:31:32.308 "subsystems": [ 00:31:32.308 { 00:31:32.308 "subsystem": "bdev", 00:31:32.308 "config": [ 00:31:32.308 { 00:31:32.308 "params": { 00:31:32.308 "trtype": "pcie", 00:31:32.308 "traddr": "0000:00:10.0", 00:31:32.308 "name": "Nvme0" 00:31:32.308 }, 00:31:32.308 "method": "bdev_nvme_attach_controller" 00:31:32.308 }, 00:31:32.308 { 00:31:32.308 "params": { 00:31:32.308 "trtype": "pcie", 00:31:32.308 "traddr": "0000:00:11.0", 00:31:32.308 "name": "Nvme1" 00:31:32.308 }, 00:31:32.308 "method": "bdev_nvme_attach_controller" 00:31:32.308 }, 00:31:32.308 { 00:31:32.308 "method": "bdev_wait_for_examine" 00:31:32.308 } 00:31:32.308 ] 00:31:32.308 } 00:31:32.308 ] 00:31:32.308 } 00:31:32.308 [2024-11-20 13:52:29.543051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:32.308 [2024-11-20 13:52:29.600964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:32.568 [2024-11-20 13:52:29.645523] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:32.831  [2024-11-20T13:52:30.414Z] Copying: 65/65 [MB] (average 613 MBps) 00:31:33.091 00:31:33.091 13:52:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:31:33.091 13:52:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:31:33.091 13:52:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:31:33.091 13:52:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:31:33.091 [2024-11-20 13:52:30.226702] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:31:33.091 [2024-11-20 13:52:30.226951] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61146 ] 00:31:33.091 { 00:31:33.091 "subsystems": [ 00:31:33.091 { 00:31:33.091 "subsystem": "bdev", 00:31:33.091 "config": [ 00:31:33.091 { 00:31:33.091 "params": { 00:31:33.091 "trtype": "pcie", 00:31:33.091 "traddr": "0000:00:10.0", 00:31:33.091 "name": "Nvme0" 00:31:33.091 }, 00:31:33.091 "method": "bdev_nvme_attach_controller" 00:31:33.091 }, 00:31:33.091 { 00:31:33.091 "params": { 00:31:33.091 "trtype": "pcie", 00:31:33.091 "traddr": "0000:00:11.0", 00:31:33.091 "name": "Nvme1" 00:31:33.091 }, 00:31:33.091 "method": "bdev_nvme_attach_controller" 00:31:33.091 }, 00:31:33.091 { 00:31:33.091 "method": "bdev_wait_for_examine" 00:31:33.091 } 00:31:33.091 ] 00:31:33.091 } 00:31:33.091 ] 00:31:33.091 } 00:31:33.091 [2024-11-20 13:52:30.378758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:33.352 [2024-11-20 13:52:30.436628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:33.352 [2024-11-20 13:52:30.480395] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:33.352  [2024-11-20T13:52:30.935Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:31:33.612 00:31:33.612 13:52:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:31:33.612 13:52:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:31:33.612 13:52:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:31:33.612 13:52:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:31:33.612 13:52:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:31:33.612 13:52:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:31:33.612 13:52:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:31:33.612 { 00:31:33.612 "subsystems": [ 00:31:33.612 { 00:31:33.612 "subsystem": "bdev", 00:31:33.612 "config": [ 00:31:33.612 { 00:31:33.612 "params": { 00:31:33.612 "trtype": "pcie", 00:31:33.612 "traddr": "0000:00:10.0", 00:31:33.612 "name": "Nvme0" 00:31:33.612 }, 00:31:33.612 "method": "bdev_nvme_attach_controller" 00:31:33.612 }, 00:31:33.612 { 00:31:33.612 "params": { 00:31:33.612 "trtype": "pcie", 00:31:33.612 "traddr": "0000:00:11.0", 00:31:33.612 "name": "Nvme1" 00:31:33.612 }, 00:31:33.612 "method": "bdev_nvme_attach_controller" 00:31:33.612 }, 00:31:33.612 { 00:31:33.612 "method": "bdev_wait_for_examine" 00:31:33.612 } 00:31:33.612 ] 00:31:33.612 } 00:31:33.612 ] 00:31:33.612 } 00:31:33.612 [2024-11-20 13:52:30.885427] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:31:33.612 [2024-11-20 13:52:30.885522] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61162 ] 00:31:33.871 [2024-11-20 13:52:31.035105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:33.871 [2024-11-20 13:52:31.104009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:33.871 [2024-11-20 13:52:31.148194] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:34.131  [2024-11-20T13:52:31.713Z] Copying: 65/65 [MB] (average 942 MBps) 00:31:34.390 00:31:34.390 13:52:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:31:34.390 13:52:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:31:34.390 13:52:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:31:34.390 13:52:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:31:34.650 [2024-11-20 13:52:31.738095] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:31:34.650 [2024-11-20 13:52:31.738280] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61177 ] 00:31:34.650 { 00:31:34.650 "subsystems": [ 00:31:34.650 { 00:31:34.650 "subsystem": "bdev", 00:31:34.650 "config": [ 00:31:34.650 { 00:31:34.650 "params": { 00:31:34.650 "trtype": "pcie", 00:31:34.650 "traddr": "0000:00:10.0", 00:31:34.650 "name": "Nvme0" 00:31:34.650 }, 00:31:34.650 "method": "bdev_nvme_attach_controller" 00:31:34.650 }, 00:31:34.650 { 00:31:34.650 "params": { 00:31:34.650 "trtype": "pcie", 00:31:34.650 "traddr": "0000:00:11.0", 00:31:34.650 "name": "Nvme1" 00:31:34.650 }, 00:31:34.650 "method": "bdev_nvme_attach_controller" 00:31:34.650 }, 00:31:34.650 { 00:31:34.650 "method": "bdev_wait_for_examine" 00:31:34.650 } 00:31:34.650 ] 00:31:34.650 } 00:31:34.650 ] 00:31:34.650 } 00:31:34.650 [2024-11-20 13:52:31.886836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:34.650 [2024-11-20 13:52:31.941508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:34.910 [2024-11-20 13:52:31.981547] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:34.910  [2024-11-20T13:52:32.493Z] Copying: 1024/1024 [kB] (average 333 MBps) 00:31:35.170 00:31:35.170 13:52:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:31:35.170 13:52:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:31:35.170 00:31:35.170 real 0m2.982s 00:31:35.170 user 0m2.178s 00:31:35.170 sys 0m0.870s 00:31:35.170 13:52:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:35.170 13:52:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:31:35.170 
************************************ 00:31:35.170 END TEST dd_offset_magic 00:31:35.170 ************************************ 00:31:35.170 13:52:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:31:35.170 13:52:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:31:35.170 13:52:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:31:35.170 13:52:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:31:35.170 13:52:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:31:35.170 13:52:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:31:35.170 13:52:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:31:35.170 13:52:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:31:35.170 13:52:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:31:35.170 13:52:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:31:35.170 13:52:32 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:31:35.170 [2024-11-20 13:52:32.423122] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:31:35.170 [2024-11-20 13:52:32.423209] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61214 ] 00:31:35.170 { 00:31:35.170 "subsystems": [ 00:31:35.170 { 00:31:35.170 "subsystem": "bdev", 00:31:35.170 "config": [ 00:31:35.170 { 00:31:35.170 "params": { 00:31:35.170 "trtype": "pcie", 00:31:35.170 "traddr": "0000:00:10.0", 00:31:35.170 "name": "Nvme0" 00:31:35.170 }, 00:31:35.170 "method": "bdev_nvme_attach_controller" 00:31:35.170 }, 00:31:35.170 { 00:31:35.170 "params": { 00:31:35.170 "trtype": "pcie", 00:31:35.170 "traddr": "0000:00:11.0", 00:31:35.170 "name": "Nvme1" 00:31:35.170 }, 00:31:35.170 "method": "bdev_nvme_attach_controller" 00:31:35.170 }, 00:31:35.170 { 00:31:35.170 "method": "bdev_wait_for_examine" 00:31:35.170 } 00:31:35.170 ] 00:31:35.170 } 00:31:35.170 ] 00:31:35.170 } 00:31:35.430 [2024-11-20 13:52:32.573240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:35.430 [2024-11-20 13:52:32.617607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:35.430 [2024-11-20 13:52:32.657647] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:35.689  [2024-11-20T13:52:33.013Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:31:35.690 00:31:35.690 13:52:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:31:35.690 13:52:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:31:35.690 13:52:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:31:35.690 13:52:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:31:35.690 13:52:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:31:35.690 13:52:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:31:35.690 13:52:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 
--count=5 --json /dev/fd/62 00:31:35.690 13:52:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:31:35.690 13:52:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:31:35.690 13:52:33 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:31:35.950 [2024-11-20 13:52:33.056812] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:31:35.950 [2024-11-20 13:52:33.057368] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61235 ] 00:31:35.950 { 00:31:35.950 "subsystems": [ 00:31:35.950 { 00:31:35.950 "subsystem": "bdev", 00:31:35.950 "config": [ 00:31:35.950 { 00:31:35.950 "params": { 00:31:35.950 "trtype": "pcie", 00:31:35.950 "traddr": "0000:00:10.0", 00:31:35.950 "name": "Nvme0" 00:31:35.950 }, 00:31:35.950 "method": "bdev_nvme_attach_controller" 00:31:35.950 }, 00:31:35.950 { 00:31:35.950 "params": { 00:31:35.950 "trtype": "pcie", 00:31:35.950 "traddr": "0000:00:11.0", 00:31:35.950 "name": "Nvme1" 00:31:35.950 }, 00:31:35.950 "method": "bdev_nvme_attach_controller" 00:31:35.950 }, 00:31:35.950 { 00:31:35.950 "method": "bdev_wait_for_examine" 00:31:35.950 } 00:31:35.950 ] 00:31:35.950 } 00:31:35.950 ] 00:31:35.950 } 00:31:35.950 [2024-11-20 13:52:33.209149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:35.950 [2024-11-20 13:52:33.254276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:36.217 [2024-11-20 13:52:33.294555] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:36.217  [2024-11-20T13:52:33.848Z] Copying: 5120/5120 [kB] (average 625 MBps) 00:31:36.525 00:31:36.525 13:52:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:31:36.525 00:31:36.525 real 0m7.192s 00:31:36.525 user 0m5.153s 00:31:36.525 sys 0m3.361s 00:31:36.525 13:52:33 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:36.525 13:52:33 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:31:36.525 ************************************ 00:31:36.525 END TEST spdk_dd_bdev_to_bdev 00:31:36.525 ************************************ 00:31:36.525 13:52:33 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:31:36.525 13:52:33 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:31:36.525 13:52:33 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:36.526 13:52:33 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:36.526 13:52:33 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:31:36.526 ************************************ 00:31:36.526 START TEST spdk_dd_uring 00:31:36.526 ************************************ 00:31:36.526 13:52:33 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:31:36.787 * Looking for test storage... 
00:31:36.787 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:31:36.787 13:52:33 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:36.787 13:52:33 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lcov --version 00:31:36.787 13:52:33 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:36.787 13:52:33 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:36.787 13:52:33 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:36.787 13:52:33 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:36.787 13:52:33 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:36.787 13:52:33 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:31:36.787 13:52:33 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:31:36.787 13:52:33 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:31:36.787 13:52:33 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:31:36.787 13:52:33 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:31:36.787 13:52:33 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:31:36.787 13:52:33 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:31:36.787 13:52:33 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:36.787 13:52:33 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:31:36.787 13:52:33 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:31:36.787 13:52:33 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:36.787 13:52:33 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:36.787 13:52:33 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:31:36.787 13:52:33 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:31:36.787 13:52:33 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:36.787 13:52:33 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:31:36.787 13:52:33 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:31:36.787 13:52:33 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:31:36.787 13:52:33 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:31:36.787 13:52:33 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:36.787 13:52:33 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:31:36.787 13:52:33 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:31:36.787 13:52:33 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:36.787 13:52:33 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:36.787 13:52:33 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:31:36.787 13:52:33 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:36.787 13:52:33 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:36.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:36.787 --rc genhtml_branch_coverage=1 00:31:36.787 --rc genhtml_function_coverage=1 00:31:36.787 --rc genhtml_legend=1 00:31:36.787 --rc geninfo_all_blocks=1 00:31:36.787 --rc geninfo_unexecuted_blocks=1 00:31:36.787 00:31:36.787 ' 00:31:36.787 13:52:33 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:36.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:36.787 --rc genhtml_branch_coverage=1 00:31:36.787 --rc genhtml_function_coverage=1 00:31:36.787 --rc genhtml_legend=1 00:31:36.787 --rc geninfo_all_blocks=1 00:31:36.787 --rc geninfo_unexecuted_blocks=1 00:31:36.787 00:31:36.787 ' 00:31:36.787 13:52:33 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:36.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:36.787 --rc genhtml_branch_coverage=1 00:31:36.787 --rc genhtml_function_coverage=1 00:31:36.787 --rc genhtml_legend=1 00:31:36.787 --rc geninfo_all_blocks=1 00:31:36.787 --rc geninfo_unexecuted_blocks=1 00:31:36.787 00:31:36.787 ' 00:31:36.787 13:52:33 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:36.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:36.787 --rc genhtml_branch_coverage=1 00:31:36.787 --rc genhtml_function_coverage=1 00:31:36.787 --rc genhtml_legend=1 00:31:36.787 --rc geninfo_all_blocks=1 00:31:36.787 --rc geninfo_unexecuted_blocks=1 00:31:36.787 00:31:36.787 ' 00:31:36.787 13:52:33 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:36.787 13:52:33 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:31:36.787 13:52:33 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:36.787 13:52:33 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:36.787 13:52:33 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:36.787 13:52:33 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.788 13:52:33 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.788 13:52:33 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.788 13:52:33 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:31:36.788 13:52:33 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.788 13:52:33 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:31:36.788 13:52:33 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:36.788 13:52:33 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:36.788 13:52:33 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:31:36.788 ************************************ 00:31:36.788 START TEST dd_uring_copy 00:31:36.788 ************************************ 00:31:36.788 13:52:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 00:31:36.788 13:52:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:31:36.788 13:52:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:31:36.788 13:52:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:31:36.788 13:52:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:31:36.788 
13:52:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:31:36.788 13:52:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:31:36.788 13:52:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:31:36.788 13:52:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:31:36.788 13:52:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:31:36.788 13:52:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:31:36.788 13:52:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:31:36.788 13:52:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:31:36.788 13:52:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:31:36.788 13:52:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:31:36.788 13:52:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:31:36.788 13:52:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:31:36.788 13:52:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:31:36.788 13:52:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:31:36.788 13:52:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:31:36.788 13:52:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:31:36.788 13:52:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:31:36.788 13:52:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:31:36.788 13:52:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:31:36.788 13:52:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:31:36.788 13:52:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:31:36.788 13:52:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=smhhdlqz0960erj916w01wy51xhpqy8rfhu0leddrm99ho898p5n2tzhq3ocaleliyrl9ylrv63e4z9lt9g7dujio6lowg0v3d0wsao8soq63m7deonlp1qvo5xsyq8bxphbn40x3qdnioxxxlprn3pnhuplvyn30vi2tt0s21fw9jfpe0g0lmestb37miefbgjjzw44o9uugnqugglj3xvixttt7uhckaedcfef5yot55dp8ybadxcr4efcali004shvxcv9p30he8p97hr6v07ekxsfidhmjof6g4t5a5f3v6xtdwrgdt7ruq57q1cmyp8ytzadootbjns85rm81f3wipzy4jbii94gl1tu0n4vj40y44o311f2u47t4l2a2hp3ajcr2274id5ggvwzs8ic9neb3vk0tb7k6am89guuwjhu15jfdc1inefzt7h4pwbwixdqi8w3ynxqz8bzkkyzyoulf04fi4vrpi2jjnqmvm3rzfe2rb9ihho75j8xwsivdz00blcfswhhq950p2dmpd75b4zp3gd8rpu97pejrtjjt9gh35nxucfvilr35igqefrd7xrwjykkp2honpz6p1qhkme9cfdsev2a62o5b4lar48vfontaryin7v3btkb5dqldt7lf7c3k65zhd2rw1qqaa2wz3g94zur1xalnjpjid2j7885bmtuum731y5bafd0bki74r4k0nddmou9le3ixfhu76q4ptuctb13uiiukycv0lfy835r48nn78koux3eoriyydb1px6voylkt64gzrh8sk7jr83yqosratfyzshxw8a4goxivn2rgyu7vhy1iebplhkn6ben4dosxvuehaxzq3vpb1f3dxba9a1zhi6ilj6b6ufne2hj0aydvuuaramzvlrxcjvh7fwg3w3ahxet9u22tj1ar8sc94wemoafiy01fbenupxxyaw6hv6xd9ninnn26met9mzf0e2biq2feeqlysj6u8t95j8x81zr02c9ezns84z 00:31:36.788 13:52:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
smhhdlqz0960erj916w01wy51xhpqy8rfhu0leddrm99ho898p5n2tzhq3ocaleliyrl9ylrv63e4z9lt9g7dujio6lowg0v3d0wsao8soq63m7deonlp1qvo5xsyq8bxphbn40x3qdnioxxxlprn3pnhuplvyn30vi2tt0s21fw9jfpe0g0lmestb37miefbgjjzw44o9uugnqugglj3xvixttt7uhckaedcfef5yot55dp8ybadxcr4efcali004shvxcv9p30he8p97hr6v07ekxsfidhmjof6g4t5a5f3v6xtdwrgdt7ruq57q1cmyp8ytzadootbjns85rm81f3wipzy4jbii94gl1tu0n4vj40y44o311f2u47t4l2a2hp3ajcr2274id5ggvwzs8ic9neb3vk0tb7k6am89guuwjhu15jfdc1inefzt7h4pwbwixdqi8w3ynxqz8bzkkyzyoulf04fi4vrpi2jjnqmvm3rzfe2rb9ihho75j8xwsivdz00blcfswhhq950p2dmpd75b4zp3gd8rpu97pejrtjjt9gh35nxucfvilr35igqefrd7xrwjykkp2honpz6p1qhkme9cfdsev2a62o5b4lar48vfontaryin7v3btkb5dqldt7lf7c3k65zhd2rw1qqaa2wz3g94zur1xalnjpjid2j7885bmtuum731y5bafd0bki74r4k0nddmou9le3ixfhu76q4ptuctb13uiiukycv0lfy835r48nn78koux3eoriyydb1px6voylkt64gzrh8sk7jr83yqosratfyzshxw8a4goxivn2rgyu7vhy1iebplhkn6ben4dosxvuehaxzq3vpb1f3dxba9a1zhi6ilj6b6ufne2hj0aydvuuaramzvlrxcjvh7fwg3w3ahxet9u22tj1ar8sc94wemoafiy01fbenupxxyaw6hv6xd9ninnn26met9mzf0e2biq2feeqlysj6u8t95j8x81zr02c9ezns84z 00:31:36.788 13:52:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:31:36.788 [2024-11-20 13:52:34.061415] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:31:36.788 [2024-11-20 13:52:34.061473] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61314 ] 00:31:37.048 [2024-11-20 13:52:34.202946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:37.048 [2024-11-20 13:52:34.253409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:37.049 [2024-11-20 13:52:34.293506] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:37.987  [2024-11-20T13:52:35.571Z] Copying: 511/511 [MB] (average 1302 MBps) 00:31:38.248 00:31:38.248 13:52:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:31:38.248 13:52:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:31:38.248 13:52:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:31:38.248 13:52:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:31:38.248 [2024-11-20 13:52:35.413017] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:31:38.248 [2024-11-20 13:52:35.413094] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61332 ] 00:31:38.248 { 00:31:38.248 "subsystems": [ 00:31:38.248 { 00:31:38.248 "subsystem": "bdev", 00:31:38.248 "config": [ 00:31:38.248 { 00:31:38.248 "params": { 00:31:38.248 "block_size": 512, 00:31:38.248 "num_blocks": 1048576, 00:31:38.248 "name": "malloc0" 00:31:38.248 }, 00:31:38.248 "method": "bdev_malloc_create" 00:31:38.248 }, 00:31:38.248 { 00:31:38.248 "params": { 00:31:38.248 "filename": "/dev/zram1", 00:31:38.248 "name": "uring0" 00:31:38.248 }, 00:31:38.248 "method": "bdev_uring_create" 00:31:38.248 }, 00:31:38.248 { 00:31:38.248 "method": "bdev_wait_for_examine" 00:31:38.248 } 00:31:38.248 ] 00:31:38.248 } 00:31:38.248 ] 00:31:38.248 } 00:31:38.248 [2024-11-20 13:52:35.560357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:38.507 [2024-11-20 13:52:35.611877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:38.507 [2024-11-20 13:52:35.688449] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:39.888  [2024-11-20T13:52:38.147Z] Copying: 264/512 [MB] (264 MBps) [2024-11-20T13:52:38.405Z] Copying: 512/512 [MB] (average 266 MBps) 00:31:41.082 00:31:41.083 13:52:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:31:41.083 13:52:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:31:41.083 13:52:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:31:41.083 13:52:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:31:41.083 [2024-11-20 13:52:38.275814] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:31:41.083 [2024-11-20 13:52:38.275886] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61376 ] 00:31:41.083 { 00:31:41.083 "subsystems": [ 00:31:41.083 { 00:31:41.083 "subsystem": "bdev", 00:31:41.083 "config": [ 00:31:41.083 { 00:31:41.083 "params": { 00:31:41.083 "block_size": 512, 00:31:41.083 "num_blocks": 1048576, 00:31:41.083 "name": "malloc0" 00:31:41.083 }, 00:31:41.083 "method": "bdev_malloc_create" 00:31:41.083 }, 00:31:41.083 { 00:31:41.083 "params": { 00:31:41.083 "filename": "/dev/zram1", 00:31:41.083 "name": "uring0" 00:31:41.083 }, 00:31:41.083 "method": "bdev_uring_create" 00:31:41.083 }, 00:31:41.083 { 00:31:41.083 "method": "bdev_wait_for_examine" 00:31:41.083 } 00:31:41.083 ] 00:31:41.083 } 00:31:41.083 ] 00:31:41.083 } 00:31:41.341 [2024-11-20 13:52:38.423635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:41.341 [2024-11-20 13:52:38.499255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:41.341 [2024-11-20 13:52:38.542300] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:42.716  [2024-11-20T13:52:40.975Z] Copying: 167/512 [MB] (167 MBps) [2024-11-20T13:52:41.912Z] Copying: 316/512 [MB] (148 MBps) [2024-11-20T13:52:42.171Z] Copying: 472/512 [MB] (155 MBps) [2024-11-20T13:52:42.738Z] Copying: 512/512 [MB] (average 152 MBps) 00:31:45.415 00:31:45.415 13:52:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:31:45.416 13:52:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ smhhdlqz0960erj916w01wy51xhpqy8rfhu0leddrm99ho898p5n2tzhq3ocaleliyrl9ylrv63e4z9lt9g7dujio6lowg0v3d0wsao8soq63m7deonlp1qvo5xsyq8bxphbn40x3qdnioxxxlprn3pnhuplvyn30vi2tt0s21fw9jfpe0g0lmestb37miefbgjjzw44o9uugnqugglj3xvixttt7uhckaedcfef5yot55dp8ybadxcr4efcali004shvxcv9p30he8p97hr6v07ekxsfidhmjof6g4t5a5f3v6xtdwrgdt7ruq57q1cmyp8ytzadootbjns85rm81f3wipzy4jbii94gl1tu0n4vj40y44o311f2u47t4l2a2hp3ajcr2274id5ggvwzs8ic9neb3vk0tb7k6am89guuwjhu15jfdc1inefzt7h4pwbwixdqi8w3ynxqz8bzkkyzyoulf04fi4vrpi2jjnqmvm3rzfe2rb9ihho75j8xwsivdz00blcfswhhq950p2dmpd75b4zp3gd8rpu97pejrtjjt9gh35nxucfvilr35igqefrd7xrwjykkp2honpz6p1qhkme9cfdsev2a62o5b4lar48vfontaryin7v3btkb5dqldt7lf7c3k65zhd2rw1qqaa2wz3g94zur1xalnjpjid2j7885bmtuum731y5bafd0bki74r4k0nddmou9le3ixfhu76q4ptuctb13uiiukycv0lfy835r48nn78koux3eoriyydb1px6voylkt64gzrh8sk7jr83yqosratfyzshxw8a4goxivn2rgyu7vhy1iebplhkn6ben4dosxvuehaxzq3vpb1f3dxba9a1zhi6ilj6b6ufne2hj0aydvuuaramzvlrxcjvh7fwg3w3ahxet9u22tj1ar8sc94wemoafiy01fbenupxxyaw6hv6xd9ninnn26met9mzf0e2biq2feeqlysj6u8t95j8x81zr02c9ezns84z == 
\s\m\h\h\d\l\q\z\0\9\6\0\e\r\j\9\1\6\w\0\1\w\y\5\1\x\h\p\q\y\8\r\f\h\u\0\l\e\d\d\r\m\9\9\h\o\8\9\8\p\5\n\2\t\z\h\q\3\o\c\a\l\e\l\i\y\r\l\9\y\l\r\v\6\3\e\4\z\9\l\t\9\g\7\d\u\j\i\o\6\l\o\w\g\0\v\3\d\0\w\s\a\o\8\s\o\q\6\3\m\7\d\e\o\n\l\p\1\q\v\o\5\x\s\y\q\8\b\x\p\h\b\n\4\0\x\3\q\d\n\i\o\x\x\x\l\p\r\n\3\p\n\h\u\p\l\v\y\n\3\0\v\i\2\t\t\0\s\2\1\f\w\9\j\f\p\e\0\g\0\l\m\e\s\t\b\3\7\m\i\e\f\b\g\j\j\z\w\4\4\o\9\u\u\g\n\q\u\g\g\l\j\3\x\v\i\x\t\t\t\7\u\h\c\k\a\e\d\c\f\e\f\5\y\o\t\5\5\d\p\8\y\b\a\d\x\c\r\4\e\f\c\a\l\i\0\0\4\s\h\v\x\c\v\9\p\3\0\h\e\8\p\9\7\h\r\6\v\0\7\e\k\x\s\f\i\d\h\m\j\o\f\6\g\4\t\5\a\5\f\3\v\6\x\t\d\w\r\g\d\t\7\r\u\q\5\7\q\1\c\m\y\p\8\y\t\z\a\d\o\o\t\b\j\n\s\8\5\r\m\8\1\f\3\w\i\p\z\y\4\j\b\i\i\9\4\g\l\1\t\u\0\n\4\v\j\4\0\y\4\4\o\3\1\1\f\2\u\4\7\t\4\l\2\a\2\h\p\3\a\j\c\r\2\2\7\4\i\d\5\g\g\v\w\z\s\8\i\c\9\n\e\b\3\v\k\0\t\b\7\k\6\a\m\8\9\g\u\u\w\j\h\u\1\5\j\f\d\c\1\i\n\e\f\z\t\7\h\4\p\w\b\w\i\x\d\q\i\8\w\3\y\n\x\q\z\8\b\z\k\k\y\z\y\o\u\l\f\0\4\f\i\4\v\r\p\i\2\j\j\n\q\m\v\m\3\r\z\f\e\2\r\b\9\i\h\h\o\7\5\j\8\x\w\s\i\v\d\z\0\0\b\l\c\f\s\w\h\h\q\9\5\0\p\2\d\m\p\d\7\5\b\4\z\p\3\g\d\8\r\p\u\9\7\p\e\j\r\t\j\j\t\9\g\h\3\5\n\x\u\c\f\v\i\l\r\3\5\i\g\q\e\f\r\d\7\x\r\w\j\y\k\k\p\2\h\o\n\p\z\6\p\1\q\h\k\m\e\9\c\f\d\s\e\v\2\a\6\2\o\5\b\4\l\a\r\4\8\v\f\o\n\t\a\r\y\i\n\7\v\3\b\t\k\b\5\d\q\l\d\t\7\l\f\7\c\3\k\6\5\z\h\d\2\r\w\1\q\q\a\a\2\w\z\3\g\9\4\z\u\r\1\x\a\l\n\j\p\j\i\d\2\j\7\8\8\5\b\m\t\u\u\m\7\3\1\y\5\b\a\f\d\0\b\k\i\7\4\r\4\k\0\n\d\d\m\o\u\9\l\e\3\i\x\f\h\u\7\6\q\4\p\t\u\c\t\b\1\3\u\i\i\u\k\y\c\v\0\l\f\y\8\3\5\r\4\8\n\n\7\8\k\o\u\x\3\e\o\r\i\y\y\d\b\1\p\x\6\v\o\y\l\k\t\6\4\g\z\r\h\8\s\k\7\j\r\8\3\y\q\o\s\r\a\t\f\y\z\s\h\x\w\8\a\4\g\o\x\i\v\n\2\r\g\y\u\7\v\h\y\1\i\e\b\p\l\h\k\n\6\b\e\n\4\d\o\s\x\v\u\e\h\a\x\z\q\3\v\p\b\1\f\3\d\x\b\a\9\a\1\z\h\i\6\i\l\j\6\b\6\u\f\n\e\2\h\j\0\a\y\d\v\u\u\a\r\a\m\z\v\l\r\x\c\j\v\h\7\f\w\g\3\w\3\a\h\x\e\t\9\u\2\2\t\j\1\a\r\8\s\c\9\4\w\e\m\o\a\f\i\y\0\1\f\b\e\n\u\p\x\x\y\a\w\6\h\v\6\x\d\9\n\i\n\n\n\2\6\m\e\t\9\m\z\f\0\e\2\b\i\q\2\f\e\e\q\l\y\s\j\6\u\8\t\9\5\j\8\x\8\1\z\r\0\2\c\9\e\z\n\s\8\4\z ]] 00:31:45.416 13:52:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:31:45.416 13:52:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ smhhdlqz0960erj916w01wy51xhpqy8rfhu0leddrm99ho898p5n2tzhq3ocaleliyrl9ylrv63e4z9lt9g7dujio6lowg0v3d0wsao8soq63m7deonlp1qvo5xsyq8bxphbn40x3qdnioxxxlprn3pnhuplvyn30vi2tt0s21fw9jfpe0g0lmestb37miefbgjjzw44o9uugnqugglj3xvixttt7uhckaedcfef5yot55dp8ybadxcr4efcali004shvxcv9p30he8p97hr6v07ekxsfidhmjof6g4t5a5f3v6xtdwrgdt7ruq57q1cmyp8ytzadootbjns85rm81f3wipzy4jbii94gl1tu0n4vj40y44o311f2u47t4l2a2hp3ajcr2274id5ggvwzs8ic9neb3vk0tb7k6am89guuwjhu15jfdc1inefzt7h4pwbwixdqi8w3ynxqz8bzkkyzyoulf04fi4vrpi2jjnqmvm3rzfe2rb9ihho75j8xwsivdz00blcfswhhq950p2dmpd75b4zp3gd8rpu97pejrtjjt9gh35nxucfvilr35igqefrd7xrwjykkp2honpz6p1qhkme9cfdsev2a62o5b4lar48vfontaryin7v3btkb5dqldt7lf7c3k65zhd2rw1qqaa2wz3g94zur1xalnjpjid2j7885bmtuum731y5bafd0bki74r4k0nddmou9le3ixfhu76q4ptuctb13uiiukycv0lfy835r48nn78koux3eoriyydb1px6voylkt64gzrh8sk7jr83yqosratfyzshxw8a4goxivn2rgyu7vhy1iebplhkn6ben4dosxvuehaxzq3vpb1f3dxba9a1zhi6ilj6b6ufne2hj0aydvuuaramzvlrxcjvh7fwg3w3ahxet9u22tj1ar8sc94wemoafiy01fbenupxxyaw6hv6xd9ninnn26met9mzf0e2biq2feeqlysj6u8t95j8x81zr02c9ezns84z == 
\s\m\h\h\d\l\q\z\0\9\6\0\e\r\j\9\1\6\w\0\1\w\y\5\1\x\h\p\q\y\8\r\f\h\u\0\l\e\d\d\r\m\9\9\h\o\8\9\8\p\5\n\2\t\z\h\q\3\o\c\a\l\e\l\i\y\r\l\9\y\l\r\v\6\3\e\4\z\9\l\t\9\g\7\d\u\j\i\o\6\l\o\w\g\0\v\3\d\0\w\s\a\o\8\s\o\q\6\3\m\7\d\e\o\n\l\p\1\q\v\o\5\x\s\y\q\8\b\x\p\h\b\n\4\0\x\3\q\d\n\i\o\x\x\x\l\p\r\n\3\p\n\h\u\p\l\v\y\n\3\0\v\i\2\t\t\0\s\2\1\f\w\9\j\f\p\e\0\g\0\l\m\e\s\t\b\3\7\m\i\e\f\b\g\j\j\z\w\4\4\o\9\u\u\g\n\q\u\g\g\l\j\3\x\v\i\x\t\t\t\7\u\h\c\k\a\e\d\c\f\e\f\5\y\o\t\5\5\d\p\8\y\b\a\d\x\c\r\4\e\f\c\a\l\i\0\0\4\s\h\v\x\c\v\9\p\3\0\h\e\8\p\9\7\h\r\6\v\0\7\e\k\x\s\f\i\d\h\m\j\o\f\6\g\4\t\5\a\5\f\3\v\6\x\t\d\w\r\g\d\t\7\r\u\q\5\7\q\1\c\m\y\p\8\y\t\z\a\d\o\o\t\b\j\n\s\8\5\r\m\8\1\f\3\w\i\p\z\y\4\j\b\i\i\9\4\g\l\1\t\u\0\n\4\v\j\4\0\y\4\4\o\3\1\1\f\2\u\4\7\t\4\l\2\a\2\h\p\3\a\j\c\r\2\2\7\4\i\d\5\g\g\v\w\z\s\8\i\c\9\n\e\b\3\v\k\0\t\b\7\k\6\a\m\8\9\g\u\u\w\j\h\u\1\5\j\f\d\c\1\i\n\e\f\z\t\7\h\4\p\w\b\w\i\x\d\q\i\8\w\3\y\n\x\q\z\8\b\z\k\k\y\z\y\o\u\l\f\0\4\f\i\4\v\r\p\i\2\j\j\n\q\m\v\m\3\r\z\f\e\2\r\b\9\i\h\h\o\7\5\j\8\x\w\s\i\v\d\z\0\0\b\l\c\f\s\w\h\h\q\9\5\0\p\2\d\m\p\d\7\5\b\4\z\p\3\g\d\8\r\p\u\9\7\p\e\j\r\t\j\j\t\9\g\h\3\5\n\x\u\c\f\v\i\l\r\3\5\i\g\q\e\f\r\d\7\x\r\w\j\y\k\k\p\2\h\o\n\p\z\6\p\1\q\h\k\m\e\9\c\f\d\s\e\v\2\a\6\2\o\5\b\4\l\a\r\4\8\v\f\o\n\t\a\r\y\i\n\7\v\3\b\t\k\b\5\d\q\l\d\t\7\l\f\7\c\3\k\6\5\z\h\d\2\r\w\1\q\q\a\a\2\w\z\3\g\9\4\z\u\r\1\x\a\l\n\j\p\j\i\d\2\j\7\8\8\5\b\m\t\u\u\m\7\3\1\y\5\b\a\f\d\0\b\k\i\7\4\r\4\k\0\n\d\d\m\o\u\9\l\e\3\i\x\f\h\u\7\6\q\4\p\t\u\c\t\b\1\3\u\i\i\u\k\y\c\v\0\l\f\y\8\3\5\r\4\8\n\n\7\8\k\o\u\x\3\e\o\r\i\y\y\d\b\1\p\x\6\v\o\y\l\k\t\6\4\g\z\r\h\8\s\k\7\j\r\8\3\y\q\o\s\r\a\t\f\y\z\s\h\x\w\8\a\4\g\o\x\i\v\n\2\r\g\y\u\7\v\h\y\1\i\e\b\p\l\h\k\n\6\b\e\n\4\d\o\s\x\v\u\e\h\a\x\z\q\3\v\p\b\1\f\3\d\x\b\a\9\a\1\z\h\i\6\i\l\j\6\b\6\u\f\n\e\2\h\j\0\a\y\d\v\u\u\a\r\a\m\z\v\l\r\x\c\j\v\h\7\f\w\g\3\w\3\a\h\x\e\t\9\u\2\2\t\j\1\a\r\8\s\c\9\4\w\e\m\o\a\f\i\y\0\1\f\b\e\n\u\p\x\x\y\a\w\6\h\v\6\x\d\9\n\i\n\n\n\2\6\m\e\t\9\m\z\f\0\e\2\b\i\q\2\f\e\e\q\l\y\s\j\6\u\8\t\9\5\j\8\x\8\1\z\r\0\2\c\9\e\z\n\s\8\4\z ]] 00:31:45.416 13:52:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:31:45.674 13:52:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:31:45.674 13:52:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:31:45.674 13:52:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:31:45.674 13:52:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:31:45.674 { 00:31:45.674 "subsystems": [ 00:31:45.674 { 00:31:45.674 "subsystem": "bdev", 00:31:45.674 "config": [ 00:31:45.674 { 00:31:45.674 "params": { 00:31:45.674 "block_size": 512, 00:31:45.674 "num_blocks": 1048576, 00:31:45.674 "name": "malloc0" 00:31:45.674 }, 00:31:45.674 "method": "bdev_malloc_create" 00:31:45.674 }, 00:31:45.674 { 00:31:45.674 "params": { 00:31:45.674 "filename": "/dev/zram1", 00:31:45.674 "name": "uring0" 00:31:45.674 }, 00:31:45.674 "method": "bdev_uring_create" 00:31:45.674 }, 00:31:45.674 { 00:31:45.674 "method": "bdev_wait_for_examine" 00:31:45.674 } 00:31:45.674 ] 00:31:45.674 } 00:31:45.674 ] 00:31:45.674 } 00:31:45.674 [2024-11-20 13:52:42.974928] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:31:45.674 [2024-11-20 13:52:42.975013] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61457 ] 00:31:45.934 [2024-11-20 13:52:43.123737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:45.934 [2024-11-20 13:52:43.208245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:45.934 [2024-11-20 13:52:43.253415] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:47.312  [2024-11-20T13:52:45.574Z] Copying: 162/512 [MB] (162 MBps) [2024-11-20T13:52:46.513Z] Copying: 324/512 [MB] (161 MBps) [2024-11-20T13:52:46.778Z] Copying: 477/512 [MB] (153 MBps) [2024-11-20T13:52:47.344Z] Copying: 512/512 [MB] (average 160 MBps) 00:31:50.021 00:31:50.021 13:52:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:31:50.021 13:52:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:31:50.021 13:52:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:31:50.021 13:52:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:31:50.021 13:52:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:31:50.021 13:52:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:31:50.021 13:52:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:31:50.021 13:52:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:31:50.021 [2024-11-20 13:52:47.181243] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:31:50.021 [2024-11-20 13:52:47.181333] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61513 ] 00:31:50.021 { 00:31:50.021 "subsystems": [ 00:31:50.021 { 00:31:50.021 "subsystem": "bdev", 00:31:50.021 "config": [ 00:31:50.021 { 00:31:50.021 "params": { 00:31:50.021 "block_size": 512, 00:31:50.021 "num_blocks": 1048576, 00:31:50.021 "name": "malloc0" 00:31:50.021 }, 00:31:50.021 "method": "bdev_malloc_create" 00:31:50.021 }, 00:31:50.021 { 00:31:50.021 "params": { 00:31:50.021 "filename": "/dev/zram1", 00:31:50.021 "name": "uring0" 00:31:50.021 }, 00:31:50.021 "method": "bdev_uring_create" 00:31:50.021 }, 00:31:50.021 { 00:31:50.021 "params": { 00:31:50.021 "name": "uring0" 00:31:50.021 }, 00:31:50.021 "method": "bdev_uring_delete" 00:31:50.021 }, 00:31:50.021 { 00:31:50.021 "method": "bdev_wait_for_examine" 00:31:50.021 } 00:31:50.021 ] 00:31:50.021 } 00:31:50.021 ] 00:31:50.021 } 00:31:50.021 [2024-11-20 13:52:47.321113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:50.280 [2024-11-20 13:52:47.403079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:50.280 [2024-11-20 13:52:47.447641] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:50.539  [2024-11-20T13:52:48.430Z] Copying: 0/0 [B] (average 0 Bps) 00:31:51.107 00:31:51.107 13:52:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:31:51.107 13:52:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:31:51.107 13:52:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:31:51.107 13:52:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:31:51.107 13:52:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:31:51.107 13:52:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 00:31:51.107 13:52:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:31:51.107 13:52:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:51.107 13:52:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:51.107 13:52:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:51.107 13:52:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:51.107 13:52:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:51.107 13:52:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:51.107 13:52:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:51.107 13:52:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:31:51.107 13:52:48 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:31:51.107 { 00:31:51.107 "subsystems": [ 00:31:51.107 { 00:31:51.107 "subsystem": "bdev", 00:31:51.107 "config": [ 00:31:51.107 { 00:31:51.107 "params": { 00:31:51.107 "block_size": 512, 00:31:51.107 "num_blocks": 1048576, 00:31:51.107 "name": "malloc0" 00:31:51.107 }, 00:31:51.107 "method": "bdev_malloc_create" 00:31:51.107 }, 00:31:51.107 { 00:31:51.107 "params": { 00:31:51.107 "filename": "/dev/zram1", 00:31:51.107 "name": "uring0" 00:31:51.107 }, 00:31:51.107 "method": "bdev_uring_create" 00:31:51.107 }, 00:31:51.107 { 00:31:51.107 "params": { 00:31:51.107 "name": "uring0" 00:31:51.107 }, 00:31:51.107 "method": "bdev_uring_delete" 00:31:51.107 }, 00:31:51.107 { 00:31:51.107 "method": "bdev_wait_for_examine" 00:31:51.107 } 00:31:51.107 ] 00:31:51.107 } 00:31:51.107 ] 00:31:51.107 } 00:31:51.107 [2024-11-20 13:52:48.200688] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:31:51.107 [2024-11-20 13:52:48.200810] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61544 ] 00:31:51.107 [2024-11-20 13:52:48.354459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:51.366 [2024-11-20 13:52:48.435790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:51.366 [2024-11-20 13:52:48.481737] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:51.624 [2024-11-20 13:52:48.717459] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:31:51.624 [2024-11-20 13:52:48.717522] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:31:51.624 [2024-11-20 13:52:48.717530] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:31:51.624 [2024-11-20 13:52:48.717539] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:51.884 [2024-11-20 13:52:49.090486] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:31:51.884 13:52:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 00:31:51.884 13:52:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:51.884 13:52:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 00:31:51.884 13:52:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 00:31:51.884 13:52:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 00:31:51.884 13:52:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:51.884 13:52:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:31:51.884 13:52:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:31:51.884 13:52:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:31:51.884 13:52:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:31:51.884 13:52:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:31:51.884 13:52:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:31:52.143 00:31:52.143 real 0m15.407s 00:31:52.143 user 0m10.525s 00:31:52.143 sys 0m12.953s 00:31:52.143 13:52:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:52.143 13:52:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:31:52.143 ************************************ 00:31:52.143 END TEST dd_uring_copy 00:31:52.143 ************************************ 00:31:52.143 ************************************ 00:31:52.143 END TEST spdk_dd_uring 00:31:52.143 ************************************ 00:31:52.143 00:31:52.143 real 0m15.702s 00:31:52.143 user 0m10.679s 00:31:52.143 sys 0m13.112s 00:31:52.143 13:52:49 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:52.143 13:52:49 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:31:52.402 13:52:49 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:31:52.402 13:52:49 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:52.402 13:52:49 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:52.402 13:52:49 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:31:52.402 ************************************ 00:31:52.402 START TEST spdk_dd_sparse 00:31:52.402 ************************************ 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:31:52.402 * Looking for test storage... 00:31:52.402 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lcov --version 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:52.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.402 --rc genhtml_branch_coverage=1 00:31:52.402 --rc genhtml_function_coverage=1 00:31:52.402 --rc genhtml_legend=1 00:31:52.402 --rc geninfo_all_blocks=1 00:31:52.402 --rc geninfo_unexecuted_blocks=1 00:31:52.402 00:31:52.402 ' 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:52.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.402 --rc genhtml_branch_coverage=1 00:31:52.402 --rc genhtml_function_coverage=1 00:31:52.402 --rc genhtml_legend=1 00:31:52.402 --rc geninfo_all_blocks=1 00:31:52.402 --rc geninfo_unexecuted_blocks=1 00:31:52.402 00:31:52.402 ' 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:52.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.402 --rc genhtml_branch_coverage=1 00:31:52.402 --rc genhtml_function_coverage=1 00:31:52.402 --rc genhtml_legend=1 00:31:52.402 --rc geninfo_all_blocks=1 00:31:52.402 --rc geninfo_unexecuted_blocks=1 00:31:52.402 00:31:52.402 ' 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:52.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.402 --rc genhtml_branch_coverage=1 00:31:52.402 --rc genhtml_function_coverage=1 00:31:52.402 --rc genhtml_legend=1 00:31:52.402 --rc geninfo_all_blocks=1 00:31:52.402 --rc geninfo_unexecuted_blocks=1 00:31:52.402 00:31:52.402 ' 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:52.402 13:52:49 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:31:52.402 13:52:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:31:52.403 13:52:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:31:52.671 13:52:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:31:52.671 1+0 records in 00:31:52.671 1+0 records out 00:31:52.671 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00663879 s, 632 MB/s 00:31:52.671 13:52:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:31:52.671 1+0 records in 00:31:52.671 1+0 records out 00:31:52.671 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0108442 s, 387 MB/s 00:31:52.671 13:52:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:31:52.671 1+0 records in 00:31:52.671 1+0 records out 00:31:52.671 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00657102 s, 638 MB/s 00:31:52.671 13:52:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:31:52.671 13:52:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:52.671 13:52:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:52.671 13:52:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:31:52.671 ************************************ 00:31:52.671 START TEST dd_sparse_file_to_file 00:31:52.671 ************************************ 00:31:52.671 13:52:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 00:31:52.671 13:52:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:31:52.671 13:52:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:31:52.671 13:52:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:31:52.671 13:52:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:31:52.671 13:52:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:31:52.671 13:52:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:31:52.671 13:52:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:31:52.671 13:52:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:31:52.671 13:52:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:31:52.671 13:52:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:31:52.671 [2024-11-20 13:52:49.834650] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:31:52.671 [2024-11-20 13:52:49.834767] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61642 ] 00:31:52.671 { 00:31:52.671 "subsystems": [ 00:31:52.671 { 00:31:52.671 "subsystem": "bdev", 00:31:52.671 "config": [ 00:31:52.671 { 00:31:52.671 "params": { 00:31:52.671 "block_size": 4096, 00:31:52.671 "filename": "dd_sparse_aio_disk", 00:31:52.671 "name": "dd_aio" 00:31:52.671 }, 00:31:52.671 "method": "bdev_aio_create" 00:31:52.671 }, 00:31:52.671 { 00:31:52.671 "params": { 00:31:52.671 "lvs_name": "dd_lvstore", 00:31:52.671 "bdev_name": "dd_aio" 00:31:52.671 }, 00:31:52.671 "method": "bdev_lvol_create_lvstore" 00:31:52.671 }, 00:31:52.671 { 00:31:52.671 "method": "bdev_wait_for_examine" 00:31:52.671 } 00:31:52.671 ] 00:31:52.671 } 00:31:52.671 ] 00:31:52.672 } 00:31:52.672 [2024-11-20 13:52:49.985746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:52.941 [2024-11-20 13:52:50.070665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:52.941 [2024-11-20 13:52:50.114943] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:52.941  [2024-11-20T13:52:50.523Z] Copying: 12/36 [MB] (average 413 MBps) 00:31:53.200 00:31:53.200 13:52:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:31:53.200 13:52:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:31:53.200 13:52:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:31:53.200 13:52:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:31:53.200 13:52:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:31:53.200 13:52:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:31:53.200 13:52:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:31:53.200 13:52:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:31:53.200 13:52:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:31:53.200 13:52:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:31:53.200 00:31:53.200 real 0m0.695s 00:31:53.200 user 0m0.439s 00:31:53.200 sys 0m0.365s 00:31:53.200 13:52:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:53.200 13:52:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:31:53.200 ************************************ 00:31:53.200 END TEST dd_sparse_file_to_file 00:31:53.200 ************************************ 00:31:53.200 13:52:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:31:53.200 13:52:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:53.200 13:52:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:53.200 13:52:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:31:53.460 ************************************ 00:31:53.460 START TEST dd_sparse_file_to_bdev 
00:31:53.460 ************************************ 00:31:53.460 13:52:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 00:31:53.460 13:52:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:31:53.460 13:52:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:31:53.460 13:52:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:31:53.460 13:52:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:31:53.460 13:52:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:31:53.460 13:52:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:31:53.460 13:52:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:31:53.460 13:52:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:31:53.460 [2024-11-20 13:52:50.587546] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:31:53.460 [2024-11-20 13:52:50.587625] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61690 ] 00:31:53.460 { 00:31:53.460 "subsystems": [ 00:31:53.460 { 00:31:53.460 "subsystem": "bdev", 00:31:53.460 "config": [ 00:31:53.460 { 00:31:53.460 "params": { 00:31:53.460 "block_size": 4096, 00:31:53.460 "filename": "dd_sparse_aio_disk", 00:31:53.460 "name": "dd_aio" 00:31:53.460 }, 00:31:53.460 "method": "bdev_aio_create" 00:31:53.460 }, 00:31:53.460 { 00:31:53.460 "params": { 00:31:53.460 "lvs_name": "dd_lvstore", 00:31:53.460 "lvol_name": "dd_lvol", 00:31:53.460 "size_in_mib": 36, 00:31:53.460 "thin_provision": true 00:31:53.460 }, 00:31:53.460 "method": "bdev_lvol_create" 00:31:53.460 }, 00:31:53.460 { 00:31:53.460 "method": "bdev_wait_for_examine" 00:31:53.460 } 00:31:53.460 ] 00:31:53.460 } 00:31:53.460 ] 00:31:53.460 } 00:31:53.460 [2024-11-20 13:52:50.735199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:53.720 [2024-11-20 13:52:50.805928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:53.720 [2024-11-20 13:52:50.853781] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:53.720  [2024-11-20T13:52:51.302Z] Copying: 12/36 [MB] (average 444 MBps) 00:31:53.979 00:31:53.979 00:31:53.979 real 0m0.646s 00:31:53.979 user 0m0.424s 00:31:53.979 sys 0m0.328s 00:31:53.979 13:52:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:53.979 13:52:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:31:53.979 ************************************ 00:31:53.979 END TEST dd_sparse_file_to_bdev 00:31:53.979 ************************************ 00:31:53.979 13:52:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 00:31:53.979 13:52:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:53.979 13:52:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:53.979 13:52:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:31:53.979 ************************************ 00:31:53.979 START TEST dd_sparse_bdev_to_file 00:31:53.979 ************************************ 00:31:53.979 13:52:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 00:31:53.979 13:52:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:31:53.979 13:52:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:31:53.980 13:52:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:31:53.980 13:52:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:31:53.980 13:52:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:31:53.980 13:52:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:31:53.980 13:52:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:31:53.980 13:52:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:31:53.980 [2024-11-20 13:52:51.297932] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:31:53.980 [2024-11-20 13:52:51.298043] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61722 ] 00:31:53.980 { 00:31:53.980 "subsystems": [ 00:31:53.980 { 00:31:53.980 "subsystem": "bdev", 00:31:53.980 "config": [ 00:31:53.980 { 00:31:53.980 "params": { 00:31:53.980 "block_size": 4096, 00:31:53.980 "filename": "dd_sparse_aio_disk", 00:31:53.980 "name": "dd_aio" 00:31:53.980 }, 00:31:53.980 "method": "bdev_aio_create" 00:31:53.980 }, 00:31:53.980 { 00:31:53.980 "method": "bdev_wait_for_examine" 00:31:53.980 } 00:31:53.980 ] 00:31:53.980 } 00:31:53.980 ] 00:31:53.980 } 00:31:54.238 [2024-11-20 13:52:51.444260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:54.238 [2024-11-20 13:52:51.528523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:54.498 [2024-11-20 13:52:51.572669] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:54.498  [2024-11-20T13:52:52.081Z] Copying: 12/36 [MB] (average 857 MBps) 00:31:54.758 00:31:54.758 13:52:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:31:54.758 13:52:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:31:54.758 13:52:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:31:54.758 13:52:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:31:54.758 13:52:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 
37748736 == \3\7\7\4\8\7\3\6 ]] 00:31:54.758 13:52:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:31:54.758 13:52:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:31:54.758 13:52:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:31:54.758 13:52:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:31:54.758 13:52:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:31:54.758 00:31:54.758 real 0m0.660s 00:31:54.758 user 0m0.405s 00:31:54.758 sys 0m0.345s 00:31:54.758 13:52:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:54.758 13:52:51 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:31:54.758 ************************************ 00:31:54.758 END TEST dd_sparse_bdev_to_file 00:31:54.758 ************************************ 00:31:54.758 13:52:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:31:54.758 13:52:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:31:54.758 13:52:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:31:54.758 13:52:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:31:54.758 13:52:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:31:54.758 00:31:54.758 real 0m2.491s 00:31:54.758 user 0m1.465s 00:31:54.758 sys 0m1.350s 00:31:54.758 13:52:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:54.758 13:52:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:31:54.758 ************************************ 00:31:54.758 END TEST spdk_dd_sparse 00:31:54.758 ************************************ 00:31:54.758 13:52:52 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:31:54.758 13:52:52 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:54.758 13:52:52 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:54.758 13:52:52 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:31:54.758 ************************************ 00:31:54.758 START TEST spdk_dd_negative 00:31:54.758 ************************************ 00:31:54.758 13:52:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:31:55.018 * Looking for test storage... 
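Stripped of the xtrace noise, the spdk_dd_sparse run above reduces to a short sequence. The sketch below only condenses the sparse.sh steps already traced; the bdev JSON is assumed to sit in an ordinary file here instead of the /dev/fd/62 descriptor the harness feeds through gen_conf:

  truncate dd_sparse_aio_disk --size 104857600            # 100 MiB AIO backing file
  dd if=/dev/zero of=file_zero1 bs=4M count=1              # 4 MiB of data at offset 0
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4       # 4 MiB of data at 16 MiB
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8       # 4 MiB of data at 32 MiB -> 36 MiB apparent size
  spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json bdev.json
  # Sparseness check: apparent size (stat %s, 37748736 above) and allocated blocks
  # (stat %b, 24576 above) must match, i.e. only the three 4 MiB extents were copied
  # and the holes between them were preserved.
  [[ $(stat --printf=%s file_zero2) == $(stat --printf=%s file_zero1) ]]
  [[ $(stat --printf=%b file_zero2) == $(stat --printf=%b file_zero1) ]]

The file_to_bdev and bdev_to_file variants run the same comparison with dd_lvstore/dd_lvol (the thin-provisioned 36 MiB lvol created on the AIO bdev) standing in for one side of the copy.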
00:31:55.018 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:31:55.018 13:52:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:55.018 13:52:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lcov --version 00:31:55.018 13:52:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:55.018 13:52:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:55.018 13:52:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:55.018 13:52:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:55.018 13:52:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:55.018 13:52:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:31:55.018 13:52:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:31:55.018 13:52:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:31:55.018 13:52:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:31:55.018 13:52:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:31:55.018 13:52:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:31:55.018 13:52:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:31:55.018 13:52:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:55.018 13:52:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:31:55.018 13:52:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:31:55.018 13:52:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:55.018 13:52:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:55.018 13:52:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:31:55.018 13:52:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:31:55.018 13:52:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:55.018 13:52:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:31:55.018 13:52:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:31:55.018 13:52:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:31:55.018 13:52:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:31:55.018 13:52:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:55.018 13:52:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:31:55.018 13:52:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:31:55.018 13:52:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:55.018 13:52:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:55.018 13:52:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:31:55.018 13:52:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:55.018 13:52:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:55.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.018 --rc genhtml_branch_coverage=1 00:31:55.018 --rc genhtml_function_coverage=1 00:31:55.018 --rc genhtml_legend=1 00:31:55.018 --rc geninfo_all_blocks=1 00:31:55.018 --rc geninfo_unexecuted_blocks=1 00:31:55.018 00:31:55.018 ' 00:31:55.018 13:52:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:55.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.018 --rc genhtml_branch_coverage=1 00:31:55.018 --rc genhtml_function_coverage=1 00:31:55.018 --rc genhtml_legend=1 00:31:55.018 --rc geninfo_all_blocks=1 00:31:55.018 --rc geninfo_unexecuted_blocks=1 00:31:55.018 00:31:55.018 ' 00:31:55.018 13:52:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:55.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.018 --rc genhtml_branch_coverage=1 00:31:55.018 --rc genhtml_function_coverage=1 00:31:55.018 --rc genhtml_legend=1 00:31:55.018 --rc geninfo_all_blocks=1 00:31:55.018 --rc geninfo_unexecuted_blocks=1 00:31:55.018 00:31:55.018 ' 00:31:55.018 13:52:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:55.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.018 --rc genhtml_branch_coverage=1 00:31:55.018 --rc genhtml_function_coverage=1 00:31:55.018 --rc genhtml_legend=1 00:31:55.018 --rc geninfo_all_blocks=1 00:31:55.018 --rc geninfo_unexecuted_blocks=1 00:31:55.018 00:31:55.018 ' 00:31:55.018 13:52:52 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:55.018 13:52:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:31:55.018 13:52:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:55.018 13:52:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:55.018 13:52:52 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:31:55.018 13:52:52 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.018 13:52:52 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.019 13:52:52 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.019 13:52:52 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:31:55.019 13:52:52 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.019 13:52:52 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:31:55.019 13:52:52 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:31:55.019 13:52:52 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:31:55.019 13:52:52 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:31:55.019 13:52:52 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:31:55.019 13:52:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:55.019 13:52:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:55.019 13:52:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:31:55.019 ************************************ 00:31:55.019 START TEST 
dd_invalid_arguments 00:31:55.019 ************************************ 00:31:55.019 13:52:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 00:31:55.019 13:52:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:31:55.019 13:52:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 00:31:55.019 13:52:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:31:55.019 13:52:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:55.019 13:52:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:55.019 13:52:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:55.019 13:52:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:55.019 13:52:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:55.019 13:52:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:55.019 13:52:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:55.019 13:52:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:31:55.019 13:52:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:31:55.279 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:31:55.279 00:31:55.279 CPU options: 00:31:55.279 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:31:55.279 (like [0,1,10]) 00:31:55.279 --lcores lcore to CPU mapping list. The list is in the format: 00:31:55.279 [<,lcores[@CPUs]>...] 00:31:55.279 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:31:55.279 Within the group, '-' is used for range separator, 00:31:55.279 ',' is used for single number separator. 00:31:55.279 '( )' can be omitted for single element group, 00:31:55.279 '@' can be omitted if cpus and lcores have the same value 00:31:55.279 --disable-cpumask-locks Disable CPU core lock files. 00:31:55.279 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:31:55.279 pollers in the app support interrupt mode) 00:31:55.279 -p, --main-core main (primary) core for DPDK 00:31:55.279 00:31:55.279 Configuration options: 00:31:55.279 -c, --config, --json JSON config file 00:31:55.279 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:31:55.279 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:31:55.279 --wait-for-rpc wait for RPCs to initialize subsystems 00:31:55.279 --rpcs-allowed comma-separated list of permitted RPCS 00:31:55.279 --json-ignore-init-errors don't exit on invalid config entry 00:31:55.279 00:31:55.279 Memory options: 00:31:55.279 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:31:55.279 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:31:55.279 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:31:55.279 -R, --huge-unlink unlink huge files after initialization 00:31:55.279 -n, --mem-channels number of memory channels used for DPDK 00:31:55.279 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:31:55.279 --msg-mempool-size global message memory pool size in count (default: 262143) 00:31:55.279 --no-huge run without using hugepages 00:31:55.279 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:31:55.279 -i, --shm-id shared memory ID (optional) 00:31:55.279 -g, --single-file-segments force creating just one hugetlbfs file 00:31:55.279 00:31:55.279 PCI options: 00:31:55.279 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:31:55.279 -B, --pci-blocked pci addr to block (can be used more than once) 00:31:55.279 -u, --no-pci disable PCI access 00:31:55.279 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:31:55.279 00:31:55.279 Log options: 00:31:55.279 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:31:55.279 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:31:55.279 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:31:55.279 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:31:55.279 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:31:55.279 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:31:55.279 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:31:55.279 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:31:55.279 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:31:55.279 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:31:55.279 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:31:55.279 --silence-noticelog disable notice level logging to stderr 00:31:55.279 00:31:55.279 Trace options: 00:31:55.279 --num-trace-entries number of trace entries for each core, must be power of 2, 00:31:55.279 setting 0 to disable trace (default 32768) 00:31:55.279 Tracepoints vary in size and can use more than one trace entry. 00:31:55.279 -e, --tpoint-group [:] 00:31:55.279 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:31:55.279 [2024-11-20 13:52:52.363542] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:31:55.279 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:31:55.279 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:31:55.279 bdev_raid, scheduler, all). 00:31:55.279 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:31:55.279 a tracepoint group. First tpoint inside a group can be enabled by 00:31:55.279 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:31:55.279 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:31:55.279 in /include/spdk_internal/trace_defs.h 00:31:55.279 00:31:55.279 Other options: 00:31:55.279 -h, --help show this usage 00:31:55.279 -v, --version print SPDK version 00:31:55.279 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:31:55.279 --env-context Opaque context for use of the env implementation 00:31:55.279 00:31:55.279 Application specific: 00:31:55.279 [--------- DD Options ---------] 00:31:55.279 --if Input file. Must specify either --if or --ib. 00:31:55.279 --ib Input bdev. Must specifier either --if or --ib 00:31:55.279 --of Output file. Must specify either --of or --ob. 00:31:55.279 --ob Output bdev. Must specify either --of or --ob. 00:31:55.279 --iflag Input file flags. 00:31:55.279 --oflag Output file flags. 00:31:55.279 --bs I/O unit size (default: 4096) 00:31:55.279 --qd Queue depth (default: 2) 00:31:55.279 --count I/O unit count. The number of I/O units to copy. (default: all) 00:31:55.279 --skip Skip this many I/O units at start of input. (default: 0) 00:31:55.279 --seek Skip this many I/O units at start of output. (default: 0) 00:31:55.279 --aio Force usage of AIO. (by default io_uring is used if available) 00:31:55.279 --sparse Enable hole skipping in input target 00:31:55.279 Available iflag and oflag values: 00:31:55.279 append - append mode 00:31:55.279 direct - use direct I/O for data 00:31:55.279 directory - fail unless a directory 00:31:55.279 dsync - use synchronized I/O for data 00:31:55.279 noatime - do not update access time 00:31:55.279 noctty - do not assign controlling terminal from file 00:31:55.279 nofollow - do not follow symlinks 00:31:55.279 nonblock - use non-blocking I/O 00:31:55.279 sync - use synchronized I/O for data and metadata 00:31:55.279 13:52:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 00:31:55.279 13:52:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:55.279 13:52:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:55.279 13:52:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:55.279 00:31:55.279 real 0m0.078s 00:31:55.279 user 0m0.046s 00:31:55.279 sys 0m0.030s 00:31:55.279 13:52:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:55.279 13:52:52 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:31:55.279 ************************************ 00:31:55.279 END TEST dd_invalid_arguments 00:31:55.279 ************************************ 00:31:55.279 13:52:52 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:31:55.279 13:52:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:55.279 13:52:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:55.279 13:52:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:31:55.279 ************************************ 00:31:55.279 START TEST dd_double_input 00:31:55.279 ************************************ 00:31:55.279 13:52:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 00:31:55.279 13:52:52 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:31:55.279 13:52:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 00:31:55.279 13:52:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:31:55.279 13:52:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:55.279 13:52:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:55.279 13:52:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:55.279 13:52:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:55.279 13:52:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:55.279 13:52:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:55.279 13:52:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:55.279 13:52:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:31:55.279 13:52:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:31:55.280 [2024-11-20 13:52:52.498770] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
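The failure just traced is the expected one: spdk_dd refuses to take both an input file and an input bdev. Going by the DD Options listed in the usage text above, a valid invocation names exactly one source and one destination; the paths and bdev names below are illustrative, mirroring the sparse tests earlier in the log:

  spdk_dd --if=in.bin --of=out.bin --bs=4096               # file -> file
  spdk_dd --if=in.bin --ob=dd_lvstore/dd_lvol --bs=4096    # file -> bdev
  spdk_dd --ib=dd_lvstore/dd_lvol --of=out.bin --bs=4096   # bdev -> file

The dd_double_output and dd_no_* cases below probe the mirror-image combinations (--of together with --ob, no input at all, no output at all) and expect the matching *ERROR* line each time.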
00:31:55.280 13:52:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 00:31:55.280 13:52:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:55.280 13:52:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:55.280 13:52:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:55.280 00:31:55.280 real 0m0.080s 00:31:55.280 user 0m0.046s 00:31:55.280 sys 0m0.032s 00:31:55.280 13:52:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:55.280 13:52:52 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:31:55.280 ************************************ 00:31:55.280 END TEST dd_double_input 00:31:55.280 ************************************ 00:31:55.280 13:52:52 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:31:55.280 13:52:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:55.280 13:52:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:55.280 13:52:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:31:55.280 ************************************ 00:31:55.280 START TEST dd_double_output 00:31:55.280 ************************************ 00:31:55.280 13:52:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 00:31:55.280 13:52:52 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:31:55.280 13:52:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 00:31:55.280 13:52:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:31:55.280 13:52:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:55.280 13:52:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:55.280 13:52:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:55.280 13:52:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:55.280 13:52:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:55.280 13:52:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:55.280 13:52:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:55.280 13:52:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:31:55.280 13:52:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:31:55.539 [2024-11-20 13:52:52.634036] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:31:55.539 13:52:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 00:31:55.540 13:52:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:55.540 13:52:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:55.540 13:52:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:55.540 00:31:55.540 real 0m0.074s 00:31:55.540 user 0m0.044s 00:31:55.540 sys 0m0.030s 00:31:55.540 13:52:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:55.540 13:52:52 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:31:55.540 ************************************ 00:31:55.540 END TEST dd_double_output 00:31:55.540 ************************************ 00:31:55.540 13:52:52 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:31:55.540 13:52:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:55.540 13:52:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:55.540 13:52:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:31:55.540 ************************************ 00:31:55.540 START TEST dd_no_input 00:31:55.540 ************************************ 00:31:55.540 13:52:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 00:31:55.540 13:52:52 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:31:55.540 13:52:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 00:31:55.540 13:52:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:31:55.540 13:52:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:55.540 13:52:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:55.540 13:52:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:55.540 13:52:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:55.540 13:52:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:55.540 13:52:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:55.540 13:52:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:55.540 13:52:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:31:55.540 13:52:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:31:55.540 [2024-11-20 13:52:52.764548] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:31:55.540 13:52:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 00:31:55.540 13:52:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:55.540 13:52:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:55.540 13:52:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:55.540 00:31:55.540 real 0m0.072s 00:31:55.540 user 0m0.044s 00:31:55.540 sys 0m0.027s 00:31:55.540 13:52:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:55.540 13:52:52 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:31:55.540 ************************************ 00:31:55.540 END TEST dd_no_input 00:31:55.540 ************************************ 00:31:55.540 13:52:52 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:31:55.540 13:52:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:55.540 13:52:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:55.540 13:52:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:31:55.540 ************************************ 00:31:55.540 START TEST dd_no_output 00:31:55.540 ************************************ 00:31:55.540 13:52:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 00:31:55.540 13:52:52 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:31:55.540 13:52:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 00:31:55.540 13:52:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:31:55.540 13:52:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:55.540 13:52:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:55.540 13:52:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:55.540 13:52:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:55.540 13:52:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:55.540 13:52:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:55.540 13:52:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:55.540 13:52:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:31:55.540 13:52:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:31:55.800 [2024-11-20 13:52:52.896944] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:31:55.800 13:52:52 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:31:55.800 13:52:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:55.800 13:52:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:55.800 13:52:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:55.800 00:31:55.800 real 0m0.076s 00:31:55.800 user 0m0.043s 00:31:55.800 sys 0m0.032s 00:31:55.800 13:52:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:55.800 13:52:52 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:31:55.800 ************************************ 00:31:55.800 END TEST dd_no_output 00:31:55.800 ************************************ 00:31:55.800 13:52:52 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:31:55.800 13:52:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:55.800 13:52:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:55.800 13:52:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:31:55.800 ************************************ 00:31:55.800 START TEST dd_wrong_blocksize 00:31:55.800 ************************************ 00:31:55.800 13:52:52 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:31:55.800 13:52:52 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:31:55.800 13:52:52 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:31:55.800 13:52:52 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:31:55.800 13:52:52 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:55.800 13:52:52 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:55.800 13:52:52 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:55.800 13:52:52 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:55.800 13:52:52 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:55.800 13:52:52 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:55.800 13:52:52 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:55.801 13:52:52 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:31:55.801 13:52:52 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:31:55.801 [2024-11-20 13:52:53.030694] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:31:55.801 13:52:53 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:31:55.801 13:52:53 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:55.801 13:52:53 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:55.801 13:52:53 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:55.801 00:31:55.801 real 0m0.074s 00:31:55.801 user 0m0.047s 00:31:55.801 sys 0m0.026s 00:31:55.801 13:52:53 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:55.801 13:52:53 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:31:55.801 ************************************ 00:31:55.801 END TEST dd_wrong_blocksize 00:31:55.801 ************************************ 00:31:55.801 13:52:53 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:31:55.801 13:52:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:55.801 13:52:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:55.801 13:52:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:31:55.801 ************************************ 00:31:55.801 START TEST dd_smaller_blocksize 00:31:55.801 ************************************ 00:31:55.801 13:52:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:31:55.801 13:52:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:31:55.801 13:52:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:31:55.801 13:52:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:31:55.801 13:52:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:55.801 13:52:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:55.801 13:52:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:55.801 13:52:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:55.801 13:52:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:55.801 13:52:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:55.801 13:52:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:55.801 
13:52:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:31:55.801 13:52:53 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:31:56.061 [2024-11-20 13:52:53.170163] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:31:56.061 [2024-11-20 13:52:53.170263] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61954 ] 00:31:56.061 [2024-11-20 13:52:53.318932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:56.320 [2024-11-20 13:52:53.409045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:56.320 [2024-11-20 13:52:53.464557] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:56.580 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:31:56.838 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:31:56.838 [2024-11-20 13:52:54.111436] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:31:56.838 [2024-11-20 13:52:54.111602] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:57.098 [2024-11-20 13:52:54.224823] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:31:57.098 13:52:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:31:57.098 13:52:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:57.098 13:52:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:31:57.098 13:52:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:31:57.098 13:52:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:31:57.098 13:52:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:57.098 00:31:57.098 real 0m1.188s 00:31:57.098 user 0m0.395s 00:31:57.098 sys 0m0.684s 00:31:57.098 13:52:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:57.098 13:52:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:31:57.098 ************************************ 00:31:57.098 END TEST dd_smaller_blocksize 00:31:57.098 ************************************ 00:31:57.098 13:52:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:31:57.098 13:52:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:57.098 13:52:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:57.098 13:52:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:31:57.098 ************************************ 00:31:57.098 START TEST dd_invalid_count 00:31:57.098 ************************************ 00:31:57.098 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 
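The exit-status bookkeeping visible in the traces (es=244, the (( es > 128 )) check, es=116, then es=1 in dd_smaller_blocksize; es=22 passed straight through in the other cases) suggests roughly the following NOT helper. This is a reconstruction for readability, not the actual autotest_common.sh source, which is not reproduced in this log; the [[ -n '' ]] expected-pattern check traced in other cases is omitted:

  NOT() {                       # assert that the wrapped command fails
    local es=0
    "$@" || es=$?
    if (( es > 128 )); then
      es=$(( es - 128 ))        # strip the signal offset: 244 -> 116 above
    fi
    case "$es" in
      116) es=1 ;;              # mapping implied by the es=116 -> es=1 trace
    esac
    (( !es == 0 ))              # succeed only when the command returned nonzero
  }

dd_invalid_count below exercises the same path with --count=-9, which spdk_dd rejects with "Invalid --count value" and es=22.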
00:31:57.098 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:31:57.098 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:31:57.098 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:31:57.098 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:57.098 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:57.098 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:57.098 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:57.098 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:57.098 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:57.098 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:57.098 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:31:57.098 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:31:57.098 [2024-11-20 13:52:54.417128] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:31:57.359 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:31:57.359 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:57.359 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:57.359 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:57.359 00:31:57.359 real 0m0.076s 00:31:57.359 user 0m0.038s 00:31:57.359 sys 0m0.035s 00:31:57.359 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:57.359 ************************************ 00:31:57.359 END TEST dd_invalid_count 00:31:57.359 ************************************ 00:31:57.359 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:31:57.359 13:52:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:31:57.359 13:52:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:57.359 13:52:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:57.359 13:52:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:31:57.359 ************************************ 
00:31:57.359 START TEST dd_invalid_oflag 00:31:57.359 ************************************ 00:31:57.359 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:31:57.359 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:31:57.359 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:31:57.359 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:31:57.359 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:57.359 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:57.359 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:57.359 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:57.359 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:57.359 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:57.359 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:57.359 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:31:57.359 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:31:57.359 [2024-11-20 13:52:54.550389] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:31:57.359 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:31:57.359 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:57.359 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:57.359 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:57.359 00:31:57.359 real 0m0.078s 00:31:57.359 user 0m0.041s 00:31:57.359 sys 0m0.035s 00:31:57.359 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:57.359 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:31:57.359 ************************************ 00:31:57.359 END TEST dd_invalid_oflag 00:31:57.359 ************************************ 00:31:57.359 13:52:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:31:57.359 13:52:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:57.359 13:52:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:57.359 13:52:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:31:57.359 ************************************ 00:31:57.359 START TEST dd_invalid_iflag 00:31:57.359 
************************************ 00:31:57.359 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:31:57.359 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:31:57.359 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:31:57.359 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:31:57.359 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:57.359 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:57.359 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:57.359 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:57.359 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:57.359 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:57.359 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:57.359 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:31:57.359 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:31:57.620 [2024-11-20 13:52:54.689825] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:31:57.620 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:31:57.620 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:57.620 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:57.620 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:57.620 00:31:57.620 real 0m0.080s 00:31:57.620 user 0m0.041s 00:31:57.620 sys 0m0.038s 00:31:57.620 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:57.620 13:52:54 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:31:57.620 ************************************ 00:31:57.620 END TEST dd_invalid_iflag 00:31:57.620 ************************************ 00:31:57.620 13:52:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:31:57.620 13:52:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:57.620 13:52:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:57.620 13:52:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:31:57.620 ************************************ 00:31:57.620 START TEST dd_unknown_flag 00:31:57.620 ************************************ 00:31:57.620 
13:52:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:31:57.620 13:52:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:31:57.620 13:52:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:31:57.620 13:52:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:31:57.620 13:52:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:57.620 13:52:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:57.620 13:52:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:57.620 13:52:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:57.620 13:52:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:57.620 13:52:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:57.620 13:52:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:57.620 13:52:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:31:57.620 13:52:54 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:31:57.620 [2024-11-20 13:52:54.833243] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:31:57.620 [2024-11-20 13:52:54.833319] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62052 ] 00:31:57.880 [2024-11-20 13:52:54.977697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:57.880 [2024-11-20 13:52:55.029877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:57.880 [2024-11-20 13:52:55.106785] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:57.880 [2024-11-20 13:52:55.158177] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:31:57.880 [2024-11-20 13:52:55.158232] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:57.880 [2024-11-20 13:52:55.158290] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:31:57.880 [2024-11-20 13:52:55.158299] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:57.880 [2024-11-20 13:52:55.158530] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:31:57.880 [2024-11-20 13:52:55.158557] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:57.880 [2024-11-20 13:52:55.158615] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:31:57.880 [2024-11-20 13:52:55.158651] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:31:58.140 [2024-11-20 13:52:55.341485] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:31:58.140 13:52:55 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:31:58.140 13:52:55 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:58.140 13:52:55 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:31:58.140 13:52:55 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:31:58.140 13:52:55 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:31:58.140 13:52:55 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:58.140 00:31:58.140 real 0m0.632s 00:31:58.140 user 0m0.351s 00:31:58.140 sys 0m0.184s 00:31:58.140 13:52:55 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:58.140 13:52:55 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:31:58.140 ************************************ 00:31:58.140 END TEST dd_unknown_flag 00:31:58.140 ************************************ 00:31:58.140 13:52:55 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:31:58.140 13:52:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:58.140 13:52:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:58.140 13:52:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:31:58.400 ************************************ 00:31:58.400 START TEST dd_invalid_json 00:31:58.400 ************************************ 00:31:58.400 13:52:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:31:58.400 13:52:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:31:58.400 13:52:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:31:58.400 13:52:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:31:58.400 13:52:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:31:58.400 13:52:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:58.400 13:52:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:58.400 13:52:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:58.400 13:52:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:58.400 13:52:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:58.400 13:52:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:58.400 13:52:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:58.400 13:52:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:31:58.400 13:52:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:31:58.400 [2024-11-20 13:52:55.530195] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:31:58.400 [2024-11-20 13:52:55.530270] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62080 ] 00:31:58.400 [2024-11-20 13:52:55.678498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:58.660 [2024-11-20 13:52:55.728767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:58.660 [2024-11-20 13:52:55.728838] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:31:58.660 [2024-11-20 13:52:55.728850] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:31:58.660 [2024-11-20 13:52:55.728857] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:58.660 [2024-11-20 13:52:55.728889] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:31:58.660 13:52:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:31:58.660 13:52:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:58.660 13:52:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:31:58.660 13:52:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:31:58.660 13:52:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:31:58.660 13:52:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:58.660 00:31:58.660 real 0m0.317s 00:31:58.660 user 0m0.154s 00:31:58.660 sys 0m0.063s 00:31:58.660 13:52:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:58.660 13:52:55 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:31:58.660 ************************************ 00:31:58.660 END TEST dd_invalid_json 00:31:58.660 ************************************ 00:31:58.660 13:52:55 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:31:58.660 13:52:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:58.660 13:52:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:58.660 13:52:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:31:58.660 ************************************ 00:31:58.660 START TEST dd_invalid_seek 00:31:58.660 ************************************ 00:31:58.660 13:52:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:31:58.660 13:52:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:31:58.660 13:52:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:31:58.660 13:52:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:31:58.660 13:52:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:31:58.660 13:52:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:31:58.660 
13:52:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:31:58.660 13:52:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:31:58.660 13:52:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:31:58.660 13:52:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:31:58.660 13:52:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:58.660 13:52:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:31:58.660 13:52:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:31:58.660 13:52:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:31:58.660 13:52:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:58.660 13:52:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:58.660 13:52:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:58.660 13:52:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:58.660 13:52:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:58.661 13:52:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:58.661 13:52:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:31:58.661 13:52:55 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:31:58.661 [2024-11-20 13:52:55.906544] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:31:58.661 [2024-11-20 13:52:55.906632] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62110 ] 00:31:58.661 { 00:31:58.661 "subsystems": [ 00:31:58.661 { 00:31:58.661 "subsystem": "bdev", 00:31:58.661 "config": [ 00:31:58.661 { 00:31:58.661 "params": { 00:31:58.661 "block_size": 512, 00:31:58.661 "num_blocks": 512, 00:31:58.661 "name": "malloc0" 00:31:58.661 }, 00:31:58.661 "method": "bdev_malloc_create" 00:31:58.661 }, 00:31:58.661 { 00:31:58.661 "params": { 00:31:58.661 "block_size": 512, 00:31:58.661 "num_blocks": 512, 00:31:58.661 "name": "malloc1" 00:31:58.661 }, 00:31:58.661 "method": "bdev_malloc_create" 00:31:58.661 }, 00:31:58.661 { 00:31:58.661 "method": "bdev_wait_for_examine" 00:31:58.661 } 00:31:58.661 ] 00:31:58.661 } 00:31:58.661 ] 00:31:58.661 } 00:31:58.919 [2024-11-20 13:52:56.038345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:58.919 [2024-11-20 13:52:56.094957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:58.919 [2024-11-20 13:52:56.173391] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:59.178 [2024-11-20 13:52:56.253429] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:31:59.178 [2024-11-20 13:52:56.253483] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:59.178 [2024-11-20 13:52:56.442380] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:31:59.437 13:52:56 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:31:59.437 13:52:56 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:59.437 13:52:56 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:31:59.437 13:52:56 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:31:59.437 13:52:56 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:31:59.437 13:52:56 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:59.437 00:31:59.437 real 0m0.659s 00:31:59.437 user 0m0.426s 00:31:59.437 sys 0m0.201s 00:31:59.437 13:52:56 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:59.437 13:52:56 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:31:59.437 ************************************ 00:31:59.437 END TEST dd_invalid_seek 00:31:59.437 ************************************ 00:31:59.437 13:52:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:31:59.437 13:52:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:59.437 13:52:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:59.437 13:52:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:31:59.437 ************************************ 00:31:59.437 START TEST dd_invalid_skip 00:31:59.437 ************************************ 00:31:59.437 13:52:56 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:31:59.437 13:52:56 spdk_dd.spdk_dd_negative.dd_invalid_skip -- 
dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:31:59.437 13:52:56 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:31:59.437 13:52:56 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:31:59.437 13:52:56 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:31:59.437 13:52:56 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:31:59.437 13:52:56 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:31:59.437 13:52:56 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:31:59.437 13:52:56 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:31:59.437 13:52:56 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:31:59.437 13:52:56 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:31:59.437 13:52:56 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:31:59.437 13:52:56 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:31:59.437 13:52:56 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:59.437 13:52:56 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:59.437 13:52:56 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:59.437 13:52:56 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:59.437 13:52:56 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:59.437 13:52:56 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:59.437 13:52:56 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:59.437 13:52:56 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:31:59.437 13:52:56 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:31:59.437 [2024-11-20 13:52:56.623423] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:31:59.437 [2024-11-20 13:52:56.623526] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62143 ] 00:31:59.437 { 00:31:59.437 "subsystems": [ 00:31:59.437 { 00:31:59.437 "subsystem": "bdev", 00:31:59.437 "config": [ 00:31:59.437 { 00:31:59.437 "params": { 00:31:59.437 "block_size": 512, 00:31:59.437 "num_blocks": 512, 00:31:59.437 "name": "malloc0" 00:31:59.437 }, 00:31:59.437 "method": "bdev_malloc_create" 00:31:59.437 }, 00:31:59.437 { 00:31:59.437 "params": { 00:31:59.437 "block_size": 512, 00:31:59.437 "num_blocks": 512, 00:31:59.437 "name": "malloc1" 00:31:59.437 }, 00:31:59.437 "method": "bdev_malloc_create" 00:31:59.437 }, 00:31:59.437 { 00:31:59.437 "method": "bdev_wait_for_examine" 00:31:59.437 } 00:31:59.437 ] 00:31:59.437 } 00:31:59.437 ] 00:31:59.437 } 00:31:59.697 [2024-11-20 13:52:56.788436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:59.697 [2024-11-20 13:52:56.836568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:59.697 [2024-11-20 13:52:56.910232] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:59.697 [2024-11-20 13:52:56.987754] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:31:59.697 [2024-11-20 13:52:56.987808] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:59.957 [2024-11-20 13:52:57.172100] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:31:59.957 13:52:57 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:31:59.957 13:52:57 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:59.957 13:52:57 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:31:59.957 13:52:57 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:31:59.957 13:52:57 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:31:59.957 13:52:57 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:59.957 00:31:59.957 real 0m0.673s 00:31:59.957 user 0m0.430s 00:31:59.957 sys 0m0.205s 00:31:59.957 13:52:57 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:59.957 13:52:57 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:31:59.957 ************************************ 00:31:59.957 END TEST dd_invalid_skip 00:31:59.957 ************************************ 00:32:00.218 13:52:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:32:00.218 13:52:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:00.218 13:52:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:00.218 13:52:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:32:00.218 ************************************ 00:32:00.218 START TEST dd_invalid_input_count 00:32:00.218 ************************************ 00:32:00.218 13:52:57 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:32:00.218 13:52:57 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:32:00.218 13:52:57 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:32:00.218 13:52:57 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:32:00.218 13:52:57 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:32:00.218 13:52:57 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:32:00.218 13:52:57 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:32:00.218 13:52:57 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:32:00.218 13:52:57 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:32:00.218 13:52:57 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:32:00.218 13:52:57 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:32:00.218 13:52:57 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:32:00.218 13:52:57 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:32:00.218 13:52:57 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:00.218 13:52:57 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:00.218 13:52:57 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:00.218 13:52:57 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:00.218 13:52:57 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:00.218 13:52:57 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:00.218 13:52:57 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:00.218 13:52:57 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:32:00.218 13:52:57 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:32:00.218 { 00:32:00.218 "subsystems": [ 00:32:00.218 { 00:32:00.218 "subsystem": "bdev", 00:32:00.218 "config": [ 00:32:00.218 { 00:32:00.218 "params": { 00:32:00.218 "block_size": 512, 00:32:00.218 "num_blocks": 512, 00:32:00.218 "name": "malloc0" 00:32:00.218 }, 
00:32:00.218 "method": "bdev_malloc_create" 00:32:00.218 }, 00:32:00.218 { 00:32:00.218 "params": { 00:32:00.218 "block_size": 512, 00:32:00.218 "num_blocks": 512, 00:32:00.218 "name": "malloc1" 00:32:00.218 }, 00:32:00.218 "method": "bdev_malloc_create" 00:32:00.218 }, 00:32:00.218 { 00:32:00.218 "method": "bdev_wait_for_examine" 00:32:00.218 } 00:32:00.218 ] 00:32:00.218 } 00:32:00.218 ] 00:32:00.218 } 00:32:00.218 [2024-11-20 13:52:57.363531] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:32:00.218 [2024-11-20 13:52:57.363596] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62177 ] 00:32:00.218 [2024-11-20 13:52:57.494680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:00.477 [2024-11-20 13:52:57.548667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:00.478 [2024-11-20 13:52:57.624466] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:32:00.478 [2024-11-20 13:52:57.701152] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:32:00.478 [2024-11-20 13:52:57.701202] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:00.736 [2024-11-20 13:52:57.888887] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:32:00.736 13:52:57 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:32:00.736 13:52:57 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:00.736 13:52:57 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:32:00.736 13:52:57 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:32:00.736 13:52:57 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:32:00.736 13:52:57 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:00.737 00:32:00.737 real 0m0.654s 00:32:00.737 user 0m0.415s 00:32:00.737 sys 0m0.202s 00:32:00.737 13:52:57 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:00.737 13:52:57 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:32:00.737 ************************************ 00:32:00.737 END TEST dd_invalid_input_count 00:32:00.737 ************************************ 00:32:00.737 13:52:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:32:00.737 13:52:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:00.737 13:52:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:00.737 13:52:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:32:00.737 ************************************ 00:32:00.737 START TEST dd_invalid_output_count 00:32:00.737 ************************************ 00:32:00.737 13:52:58 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # invalid_output_count 00:32:00.737 13:52:58 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 
mbdev0_b=512 mbdev0_bs=512 00:32:00.737 13:52:58 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:32:00.737 13:52:58 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:32:00.737 13:52:58 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:32:00.737 13:52:58 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:32:00.737 13:52:58 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:32:00.737 13:52:58 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:32:00.737 13:52:58 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:32:00.737 13:52:58 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:32:00.737 13:52:58 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:00.737 13:52:58 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:00.737 13:52:58 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:00.737 13:52:58 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:00.737 13:52:58 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:00.737 13:52:58 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:00.737 13:52:58 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:00.737 13:52:58 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:32:00.737 13:52:58 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:32:00.996 { 00:32:00.996 "subsystems": [ 00:32:00.996 { 00:32:00.996 "subsystem": "bdev", 00:32:00.996 "config": [ 00:32:00.996 { 00:32:00.996 "params": { 00:32:00.996 "block_size": 512, 00:32:00.996 "num_blocks": 512, 00:32:00.996 "name": "malloc0" 00:32:00.996 }, 00:32:00.996 "method": "bdev_malloc_create" 00:32:00.996 }, 00:32:00.996 { 00:32:00.996 "method": "bdev_wait_for_examine" 00:32:00.996 } 00:32:00.996 ] 00:32:00.996 } 00:32:00.996 ] 00:32:00.996 } 00:32:00.996 [2024-11-20 13:52:58.079682] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:32:00.996 [2024-11-20 13:52:58.079781] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62218 ] 00:32:00.997 [2024-11-20 13:52:58.226524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:00.997 [2024-11-20 13:52:58.277589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:01.256 [2024-11-20 13:52:58.352171] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:32:01.256 [2024-11-20 13:52:58.420645] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:32:01.256 [2024-11-20 13:52:58.420697] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:01.517 [2024-11-20 13:52:58.605284] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:32:01.517 13:52:58 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:32:01.517 13:52:58 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:01.517 13:52:58 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:32:01.517 13:52:58 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:32:01.517 13:52:58 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:32:01.517 13:52:58 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:01.517 00:32:01.517 real 0m0.651s 00:32:01.517 user 0m0.418s 00:32:01.517 sys 0m0.179s 00:32:01.517 13:52:58 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:01.517 13:52:58 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:32:01.517 ************************************ 00:32:01.517 END TEST dd_invalid_output_count 00:32:01.517 ************************************ 00:32:01.517 13:52:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:32:01.517 13:52:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:01.517 13:52:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:01.517 13:52:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:32:01.517 ************************************ 00:32:01.517 START TEST dd_bs_not_multiple 00:32:01.517 ************************************ 00:32:01.517 13:52:58 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:32:01.517 13:52:58 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:32:01.517 13:52:58 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:32:01.517 13:52:58 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:32:01.517 13:52:58 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:32:01.517 13:52:58 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:32:01.517 13:52:58 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:32:01.517 13:52:58 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:32:01.517 13:52:58 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:32:01.517 13:52:58 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:32:01.517 13:52:58 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:32:01.517 13:52:58 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:32:01.517 13:52:58 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:32:01.517 13:52:58 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:01.517 13:52:58 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:01.517 13:52:58 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:01.517 13:52:58 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:01.517 13:52:58 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:01.517 13:52:58 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:01.517 13:52:58 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:01.517 13:52:58 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:32:01.517 13:52:58 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:32:01.517 [2024-11-20 13:52:58.790414] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:32:01.517 [2024-11-20 13:52:58.790527] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62253 ] 00:32:01.517 { 00:32:01.517 "subsystems": [ 00:32:01.517 { 00:32:01.517 "subsystem": "bdev", 00:32:01.517 "config": [ 00:32:01.517 { 00:32:01.517 "params": { 00:32:01.517 "block_size": 512, 00:32:01.517 "num_blocks": 512, 00:32:01.517 "name": "malloc0" 00:32:01.517 }, 00:32:01.517 "method": "bdev_malloc_create" 00:32:01.517 }, 00:32:01.517 { 00:32:01.517 "params": { 00:32:01.517 "block_size": 512, 00:32:01.517 "num_blocks": 512, 00:32:01.517 "name": "malloc1" 00:32:01.517 }, 00:32:01.517 "method": "bdev_malloc_create" 00:32:01.517 }, 00:32:01.517 { 00:32:01.517 "method": "bdev_wait_for_examine" 00:32:01.517 } 00:32:01.517 ] 00:32:01.517 } 00:32:01.517 ] 00:32:01.517 } 00:32:01.777 [2024-11-20 13:52:58.944240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:01.777 [2024-11-20 13:52:58.992703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:01.777 [2024-11-20 13:52:59.068211] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:32:02.037 [2024-11-20 13:52:59.144276] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:32:02.037 [2024-11-20 13:52:59.144326] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:02.037 [2024-11-20 13:52:59.328741] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:32:02.295 13:52:59 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:32:02.295 13:52:59 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:02.295 13:52:59 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:32:02.295 13:52:59 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:32:02.295 13:52:59 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:32:02.295 13:52:59 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:02.295 00:32:02.295 real 0m0.667s 00:32:02.295 user 0m0.434s 00:32:02.295 sys 0m0.196s 00:32:02.295 13:52:59 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:02.295 13:52:59 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:32:02.295 ************************************ 00:32:02.295 END TEST dd_bs_not_multiple 00:32:02.295 ************************************ 00:32:02.295 00:32:02.295 real 0m7.405s 00:32:02.295 user 0m3.873s 00:32:02.295 sys 0m3.047s 00:32:02.295 13:52:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:02.295 13:52:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:32:02.295 ************************************ 00:32:02.295 END TEST spdk_dd_negative 00:32:02.295 ************************************ 00:32:02.295 ************************************ 00:32:02.295 END TEST spdk_dd 00:32:02.295 ************************************ 00:32:02.295 00:32:02.295 real 1m17.597s 00:32:02.295 user 0m49.268s 00:32:02.295 sys 0m34.529s 00:32:02.295 13:52:59 spdk_dd -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:32:02.295 13:52:59 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:32:02.295 13:52:59 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:32:02.295 13:52:59 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:32:02.295 13:52:59 -- spdk/autotest.sh@260 -- # timing_exit lib 00:32:02.295 13:52:59 -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:02.295 13:52:59 -- common/autotest_common.sh@10 -- # set +x 00:32:02.295 13:52:59 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:32:02.295 13:52:59 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:32:02.295 13:52:59 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:32:02.295 13:52:59 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:32:02.295 13:52:59 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:32:02.295 13:52:59 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:32:02.295 13:52:59 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:32:02.295 13:52:59 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:02.295 13:52:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:02.295 13:52:59 -- common/autotest_common.sh@10 -- # set +x 00:32:02.295 ************************************ 00:32:02.295 START TEST nvmf_tcp 00:32:02.295 ************************************ 00:32:02.295 13:52:59 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:32:02.554 * Looking for test storage... 00:32:02.554 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:32:02.554 13:52:59 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:02.554 13:52:59 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:32:02.554 13:52:59 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:02.554 13:52:59 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:02.554 13:52:59 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:02.554 13:52:59 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:02.554 13:52:59 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:02.554 13:52:59 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:32:02.554 13:52:59 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:32:02.554 13:52:59 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:32:02.554 13:52:59 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:32:02.554 13:52:59 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:32:02.554 13:52:59 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:32:02.554 13:52:59 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:32:02.554 13:52:59 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:02.554 13:52:59 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:32:02.554 13:52:59 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:32:02.554 13:52:59 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:02.554 13:52:59 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:02.554 13:52:59 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:32:02.554 13:52:59 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:32:02.554 13:52:59 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:02.554 13:52:59 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:32:02.554 13:52:59 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:32:02.554 13:52:59 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:32:02.554 13:52:59 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:32:02.554 13:52:59 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:02.554 13:52:59 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:32:02.554 13:52:59 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:32:02.554 13:52:59 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:02.554 13:52:59 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:02.554 13:52:59 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:32:02.554 13:52:59 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:02.554 13:52:59 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:02.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.554 --rc genhtml_branch_coverage=1 00:32:02.554 --rc genhtml_function_coverage=1 00:32:02.554 --rc genhtml_legend=1 00:32:02.554 --rc geninfo_all_blocks=1 00:32:02.554 --rc geninfo_unexecuted_blocks=1 00:32:02.554 00:32:02.554 ' 00:32:02.554 13:52:59 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:02.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.555 --rc genhtml_branch_coverage=1 00:32:02.555 --rc genhtml_function_coverage=1 00:32:02.555 --rc genhtml_legend=1 00:32:02.555 --rc geninfo_all_blocks=1 00:32:02.555 --rc geninfo_unexecuted_blocks=1 00:32:02.555 00:32:02.555 ' 00:32:02.555 13:52:59 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:02.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.555 --rc genhtml_branch_coverage=1 00:32:02.555 --rc genhtml_function_coverage=1 00:32:02.555 --rc genhtml_legend=1 00:32:02.555 --rc geninfo_all_blocks=1 00:32:02.555 --rc geninfo_unexecuted_blocks=1 00:32:02.555 00:32:02.555 ' 00:32:02.555 13:52:59 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:02.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.555 --rc genhtml_branch_coverage=1 00:32:02.555 --rc genhtml_function_coverage=1 00:32:02.555 --rc genhtml_legend=1 00:32:02.555 --rc geninfo_all_blocks=1 00:32:02.555 --rc geninfo_unexecuted_blocks=1 00:32:02.555 00:32:02.555 ' 00:32:02.555 13:52:59 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:32:02.555 13:52:59 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:32:02.555 13:52:59 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:32:02.555 13:52:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:02.555 13:52:59 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:02.555 13:52:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:02.555 ************************************ 00:32:02.555 START TEST nvmf_target_core 00:32:02.555 ************************************ 00:32:02.555 13:52:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:32:02.814 * Looking for test storage... 00:32:02.814 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:32:02.814 13:52:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:02.814 13:52:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:32:02.814 13:52:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:02.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.814 --rc genhtml_branch_coverage=1 00:32:02.814 --rc genhtml_function_coverage=1 00:32:02.814 --rc genhtml_legend=1 00:32:02.814 --rc geninfo_all_blocks=1 00:32:02.814 --rc geninfo_unexecuted_blocks=1 00:32:02.814 00:32:02.814 ' 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:02.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.814 --rc genhtml_branch_coverage=1 00:32:02.814 --rc genhtml_function_coverage=1 00:32:02.814 --rc genhtml_legend=1 00:32:02.814 --rc geninfo_all_blocks=1 00:32:02.814 --rc geninfo_unexecuted_blocks=1 00:32:02.814 00:32:02.814 ' 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:02.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.814 --rc genhtml_branch_coverage=1 00:32:02.814 --rc genhtml_function_coverage=1 00:32:02.814 --rc genhtml_legend=1 00:32:02.814 --rc geninfo_all_blocks=1 00:32:02.814 --rc geninfo_unexecuted_blocks=1 00:32:02.814 00:32:02.814 ' 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:02.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.814 --rc genhtml_branch_coverage=1 00:32:02.814 --rc genhtml_function_coverage=1 00:32:02.814 --rc genhtml_legend=1 00:32:02.814 --rc geninfo_all_blocks=1 00:32:02.814 --rc geninfo_unexecuted_blocks=1 00:32:02.814 00:32:02.814 ' 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=105ec898-1662-46bd-85be-b241e399edb9 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
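
[Note] The trace above comes from sourcing test/nvmf/common.sh, which seeds the NVMe-oF test environment: listener ports 4420-4422, a host NQN generated with `nvme gen-hostnqn`, and a host ID taken from the UUID portion of that NQN (the value 105ec898-... matches the NQN suffix). A minimal sketch of that derivation; the variable names follow the trace, but the exact extraction used by common.sh is an assumption:

    # Sketch only: derive the host identity the way the trace suggests (assumed, not verbatim common.sh)
    NVMF_PORT=4420
    NVMF_SECOND_PORT=4421
    NVMF_THIRD_PORT=4422
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:105ec898-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # keep only the UUID part for --hostid
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

These two values are what the initiator-side `nvme connect` calls and the bdevperf JSON later in the log pass as hostnqn/hostid.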
00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:02.814 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:02.814 13:53:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:02.815 13:53:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:32:02.815 13:53:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:32:02.815 13:53:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:32:02.815 13:53:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:32:02.815 13:53:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:02.815 13:53:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:02.815 13:53:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:32:02.815 ************************************ 00:32:02.815 START TEST nvmf_host_management 00:32:02.815 ************************************ 00:32:02.815 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:32:03.075 * Looking for test storage... 
00:32:03.076 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:03.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:03.076 --rc genhtml_branch_coverage=1 00:32:03.076 --rc genhtml_function_coverage=1 00:32:03.076 --rc genhtml_legend=1 00:32:03.076 --rc geninfo_all_blocks=1 00:32:03.076 --rc geninfo_unexecuted_blocks=1 00:32:03.076 00:32:03.076 ' 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:03.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:03.076 --rc genhtml_branch_coverage=1 00:32:03.076 --rc genhtml_function_coverage=1 00:32:03.076 --rc genhtml_legend=1 00:32:03.076 --rc geninfo_all_blocks=1 00:32:03.076 --rc geninfo_unexecuted_blocks=1 00:32:03.076 00:32:03.076 ' 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:03.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:03.076 --rc genhtml_branch_coverage=1 00:32:03.076 --rc genhtml_function_coverage=1 00:32:03.076 --rc genhtml_legend=1 00:32:03.076 --rc geninfo_all_blocks=1 00:32:03.076 --rc geninfo_unexecuted_blocks=1 00:32:03.076 00:32:03.076 ' 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:03.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:03.076 --rc genhtml_branch_coverage=1 00:32:03.076 --rc genhtml_function_coverage=1 00:32:03.076 --rc genhtml_legend=1 00:32:03.076 --rc geninfo_all_blocks=1 00:32:03.076 --rc geninfo_unexecuted_blocks=1 00:32:03.076 00:32:03.076 ' 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
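
[Note] The repeated `lcov --version` probe above runs the component-wise version check from scripts/common.sh (`lt 1.15 2` delegating to `cmp_versions 1.15 '<' 2`): split both versions on '.', '-' and ':' and compare field by field. A condensed sketch of that logic, assuming the helper names from the trace rather than the verbatim script:

    # Sketch of the comparison seen in the trace (assumed condensation, not verbatim scripts/common.sh)
    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v ver1_l ver2_l
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && [[ $op == '>' ]] && return 0
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && [[ $op == '<' ]] && return 0
            (( ${ver1[v]:-0} != ${ver2[v]:-0} )) && return 1
        done
        [[ $op == *'='* ]]   # equal versions only satisfy <=, >=, ==
    }

Because the installed lcov is older than 2, the check succeeds and the harness exports the lcov_branch_coverage/lcov_function_coverage option spelling seen in the LCOV_OPTS/LCOV exports that follow.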
00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=105ec898-1662-46bd-85be-b241e399edb9 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:03.076 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:03.077 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:03.077 13:53:00 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:03.077 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:32:03.337 Cannot find device "nvmf_init_br" 00:32:03.337 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:32:03.337 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:32:03.337 Cannot find device "nvmf_init_br2" 00:32:03.337 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:32:03.337 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:32:03.337 Cannot find device "nvmf_tgt_br" 00:32:03.337 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:32:03.337 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:32:03.337 Cannot find device "nvmf_tgt_br2" 00:32:03.337 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:32:03.337 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:32:03.337 Cannot find device "nvmf_init_br" 00:32:03.337 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:32:03.337 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:32:03.337 Cannot find device "nvmf_init_br2" 00:32:03.337 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:32:03.337 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:32:03.337 Cannot find device "nvmf_tgt_br" 00:32:03.337 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:32:03.337 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:32:03.337 Cannot find device "nvmf_tgt_br2" 00:32:03.337 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:32:03.337 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:32:03.337 Cannot find device "nvmf_br" 00:32:03.337 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:32:03.337 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:32:03.337 Cannot find device "nvmf_init_if" 00:32:03.337 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:32:03.337 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:32:03.337 Cannot find device "nvmf_init_if2" 00:32:03.338 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:32:03.338 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:03.338 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:03.338 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:32:03.338 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:03.338 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:03.338 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:32:03.338 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:32:03.338 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:03.338 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:32:03.338 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:03.597 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:03.597 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:03.597 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:03.597 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:03.597 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:32:03.597 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:32:03.597 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:32:03.597 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:32:03.597 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:32:03.597 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:32:03.597 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:32:03.597 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:32:03.597 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:32:03.597 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:03.597 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:03.597 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:03.597 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:32:03.597 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:32:03.597 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:32:03.597 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:32:03.597 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:03.597 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:03.857 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:03.857 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:32:03.857 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:32:03.857 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:32:03.857 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:03.857 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:32:03.857 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:32:03.857 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:32:03.857 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.175 ms 00:32:03.857 00:32:03.857 --- 10.0.0.3 ping statistics --- 00:32:03.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:03.857 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:32:03.857 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:32:03.857 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:32:03.857 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.125 ms 00:32:03.857 00:32:03.857 --- 10.0.0.4 ping statistics --- 00:32:03.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:03.857 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:32:03.857 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:03.857 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:03.857 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:32:03.857 00:32:03.857 --- 10.0.0.1 ping statistics --- 00:32:03.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:03.857 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:32:03.857 13:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:32:03.857 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:03.857 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:32:03.857 00:32:03.857 --- 10.0.0.2 ping statistics --- 00:32:03.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:03.857 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:32:03.857 13:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:03.857 13:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:32:03.857 13:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:03.857 13:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:03.857 13:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:03.857 13:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:03.857 13:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:03.857 13:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:03.857 13:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:03.857 13:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:32:03.857 13:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:32:03.857 13:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:32:03.857 13:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:03.857 13:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:03.857 13:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:03.857 13:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=62594 00:32:03.857 13:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:32:03.858 13:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 62594 00:32:03.858 13:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62594 ']' 00:32:03.858 13:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:03.858 13:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:03.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:03.858 13:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:03.858 13:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:03.858 13:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:03.858 [2024-11-20 13:53:01.126013] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
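
[Note] nvmf_veth_init above builds the virtual topology the TCP tests run over: initiator veth pairs in the root namespace, target veth pairs moved into nvmf_tgt_ns_spdk, everything joined by the nvmf_br bridge, iptables ACCEPT rules tagged with an SPDK_NVMF comment for later cleanup, and one-packet pings to each address to prove connectivity before the target starts. A trimmed sketch of the same layout, reduced to one initiator/target pair; all device names and addresses are taken from the trace:

    # Sketch: the veth/bridge layout exercised above (trimmed to a single pair)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.3   # initiator side must reach the target address before nvmf_tgt starts

The "Cannot find device" and "Cannot open network namespace" messages earlier are the expected result of tearing down a topology that does not exist yet; the `true` after each confirms they are tolerated.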
00:32:03.858 [2024-11-20 13:53:01.126096] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:04.117 [2024-11-20 13:53:01.278672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:04.118 [2024-11-20 13:53:01.369213] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:04.118 [2024-11-20 13:53:01.369292] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:04.118 [2024-11-20 13:53:01.369300] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:04.118 [2024-11-20 13:53:01.369306] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:04.118 [2024-11-20 13:53:01.369311] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:04.118 [2024-11-20 13:53:01.370455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:04.118 [2024-11-20 13:53:01.370536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:04.118 [2024-11-20 13:53:01.370683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:04.118 [2024-11-20 13:53:01.370689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:04.118 [2024-11-20 13:53:01.416175] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:32:05.075 13:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:05.075 13:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:32:05.075 13:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:05.075 13:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:05.075 13:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:05.075 13:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:05.075 13:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:05.075 13:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.075 13:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:05.075 [2024-11-20 13:53:02.212943] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:05.075 13:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.075 13:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:32:05.075 13:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:05.075 13:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:05.075 13:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
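
[Note] nvmfappstart above launches the target inside the test namespace and blocks in waitforlisten until the RPC socket answers; only then is the TCP transport created over RPC. A sketch of that startup sequence using the paths and arguments shown in the trace; the polling loop is an assumed simplification of waitforlisten:

    # Sketch: start nvmf_tgt in the namespace, wait for its RPC socket, then create the transport
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    while ! [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # stand-in for waitforlisten "$nvmfpid"
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

The reactor and sock notices in between are the target coming up on cores 1-4 (-m 0x1E) with the uring socket implementation selected by this job's configuration.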
00:32:05.075 13:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:32:05.075 13:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:32:05.075 13:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.075 13:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:05.075 Malloc0 00:32:05.075 [2024-11-20 13:53:02.305265] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:05.075 13:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.075 13:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:32:05.075 13:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:05.075 13:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:05.075 13:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=62648 00:32:05.075 13:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 62648 /var/tmp/bdevperf.sock 00:32:05.075 13:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62648 ']' 00:32:05.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:05.075 13:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:05.075 13:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:32:05.075 13:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:32:05.075 13:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:05.075 13:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
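
[Note] The rpcs.txt piped into `rpc_cmd` above is not echoed in the log, but it is what produces the Malloc0 bdev and the listener on 10.0.0.3:4420 reported in the notices. A hypothetical reconstruction of the kind of RPC lines involved, based only on those notices, the MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 values set earlier, and the cnode0/host0 NQNs that appear in the generated JSON below; the real file may differ:

    # Hypothetical reconstruction, not the literal rpcs.txt from this run
    bdev_malloc_create 64 512 -b Malloc0
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

The later `nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0` call in this test only makes sense if the host was added explicitly, which is why the sketch adds it rather than allowing any host.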
00:32:05.075 13:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:32:05.075 13:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:05.075 13:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:32:05.075 13:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:05.075 13:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:05.075 13:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:05.075 { 00:32:05.075 "params": { 00:32:05.075 "name": "Nvme$subsystem", 00:32:05.075 "trtype": "$TEST_TRANSPORT", 00:32:05.075 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:05.075 "adrfam": "ipv4", 00:32:05.075 "trsvcid": "$NVMF_PORT", 00:32:05.075 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:05.075 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:05.075 "hdgst": ${hdgst:-false}, 00:32:05.075 "ddgst": ${ddgst:-false} 00:32:05.075 }, 00:32:05.075 "method": "bdev_nvme_attach_controller" 00:32:05.075 } 00:32:05.075 EOF 00:32:05.075 )") 00:32:05.075 13:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:32:05.075 13:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:32:05.075 13:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:32:05.075 13:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:05.075 "params": { 00:32:05.075 "name": "Nvme0", 00:32:05.075 "trtype": "tcp", 00:32:05.075 "traddr": "10.0.0.3", 00:32:05.075 "adrfam": "ipv4", 00:32:05.075 "trsvcid": "4420", 00:32:05.075 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:05.075 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:05.075 "hdgst": false, 00:32:05.075 "ddgst": false 00:32:05.075 }, 00:32:05.075 "method": "bdev_nvme_attach_controller" 00:32:05.075 }' 00:32:05.334 [2024-11-20 13:53:02.432759] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:32:05.334 [2024-11-20 13:53:02.432837] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62648 ] 00:32:05.334 [2024-11-20 13:53:02.587307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:05.593 [2024-11-20 13:53:02.672756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:05.594 [2024-11-20 13:53:02.764777] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:32:05.594 Running I/O for 10 seconds... 
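
[Note] gen_nvmf_target_json above expands the heredoc once per requested subsystem and jq-merges the fragments into the bdev_nvme_attach_controller config that bdevperf consumes through process substitution (the `--json /dev/fd/63` in its command line). The waitforio block that follows then polls the resulting Nvme0n1 bdev until it has completed enough reads. A sketch of both pieces, assuming the helper names from the trace:

    # Sketch: feed the generated JSON to bdevperf via process substitution, then check I/O counters
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10 &
    while ! [ -S /var/tmp/bdevperf.sock ]; do sleep 0.1; done   # wait for the bdevperf RPC socket
    # waitforio-style probe: sample the read counter and see whether it cleared the threshold
    reads=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
            | jq -r '.bdevs[0].num_read_ops')
    [ "$reads" -ge 100 ] && echo "I/O is flowing over the TCP listener"

In the run below the first sample already reports 707 completed reads, so the 100-read threshold is met on the first pass and the test proceeds to remove the host while I/O is in flight.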
00:32:06.163 13:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:06.163 13:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:32:06.163 13:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:32:06.163 13:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.163 13:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:06.163 13:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.163 13:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:06.163 13:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:32:06.163 13:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:32:06.163 13:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:32:06.163 13:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:32:06.163 13:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:32:06.163 13:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:32:06.163 13:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:32:06.163 13:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:32:06.163 13:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:32:06.163 13:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.163 13:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:06.163 13:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.163 13:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:32:06.163 13:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:32:06.163 13:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:32:06.163 13:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:32:06.163 13:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:32:06.163 13:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:32:06.163 13:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.163 13:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:06.163 [2024-11-20 
13:53:03.478528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:06.163 [2024-11-20 13:53:03.478580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.163 [2024-11-20 13:53:03.478590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:06.163 [2024-11-20 13:53:03.478598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.163 [2024-11-20 13:53:03.478606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:06.163 [2024-11-20 13:53:03.478613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.163 [2024-11-20 13:53:03.478620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:06.163 [2024-11-20 13:53:03.478627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.163 [2024-11-20 13:53:03.478634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eacce0 is same with the state(6) to be set 00:32:06.163 [2024-11-20 13:53:03.478993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.163 [2024-11-20 13:53:03.479018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.163 [2024-11-20 13:53:03.479039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.163 [2024-11-20 13:53:03.479047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.163 [2024-11-20 13:53:03.479059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.164 [2024-11-20 13:53:03.479067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.164 [2024-11-20 13:53:03.479079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.164 [2024-11-20 13:53:03.479087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.164 [2024-11-20 13:53:03.479097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.164 [2024-11-20 13:53:03.479105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.164 [2024-11-20 13:53:03.479116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.164 [2024-11-20 13:53:03.479124] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.164 [2024-11-20 13:53:03.479135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.164 [2024-11-20 13:53:03.479143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.164 [2024-11-20 13:53:03.479153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.164 [2024-11-20 13:53:03.479161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.164 [2024-11-20 13:53:03.479171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.164 [2024-11-20 13:53:03.479179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.164 [2024-11-20 13:53:03.479190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.164 [2024-11-20 13:53:03.479197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.164 [2024-11-20 13:53:03.479207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.164 [2024-11-20 13:53:03.479222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.164 [2024-11-20 13:53:03.479232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.164 [2024-11-20 13:53:03.479240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.164 [2024-11-20 13:53:03.479251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.164 [2024-11-20 13:53:03.479260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.164 [2024-11-20 13:53:03.479270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.164 [2024-11-20 13:53:03.479278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.164 [2024-11-20 13:53:03.479288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.164 [2024-11-20 13:53:03.479297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.164 [2024-11-20 13:53:03.479307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.164 [2024-11-20 13:53:03.479317] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.164 [2024-11-20 13:53:03.479329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.164 [2024-11-20 13:53:03.479339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.164 [2024-11-20 13:53:03.479351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.164 [2024-11-20 13:53:03.479358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.164 [2024-11-20 13:53:03.479366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.164 [2024-11-20 13:53:03.479373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.164 [2024-11-20 13:53:03.479382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.164 [2024-11-20 13:53:03.479389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.164 [2024-11-20 13:53:03.479397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.164 [2024-11-20 13:53:03.479403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.164 [2024-11-20 13:53:03.479412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.164 [2024-11-20 13:53:03.479419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.164 [2024-11-20 13:53:03.479427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.164 [2024-11-20 13:53:03.479434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.164 [2024-11-20 13:53:03.479443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.164 [2024-11-20 13:53:03.479449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.164 [2024-11-20 13:53:03.479457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.164 [2024-11-20 13:53:03.479464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.164 [2024-11-20 13:53:03.479472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.164 [2024-11-20 13:53:03.479478] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.164 [2024-11-20 13:53:03.479486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.164 [2024-11-20 13:53:03.479495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.164 [2024-11-20 13:53:03.479503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.164 [2024-11-20 13:53:03.479509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.164 [2024-11-20 13:53:03.479517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.164 [2024-11-20 13:53:03.479525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.164 [2024-11-20 13:53:03.479534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.164 [2024-11-20 13:53:03.479541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.164 [2024-11-20 13:53:03.479549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.164 [2024-11-20 13:53:03.479556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.164 [2024-11-20 13:53:03.479564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.164 [2024-11-20 13:53:03.479570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.164 [2024-11-20 13:53:03.479578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.164 [2024-11-20 13:53:03.479585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.164 [2024-11-20 13:53:03.479593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.164 [2024-11-20 13:53:03.479599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.164 [2024-11-20 13:53:03.479607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.164 [2024-11-20 13:53:03.479614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.164 [2024-11-20 13:53:03.479622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.164 [2024-11-20 13:53:03.479628] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.164 [2024-11-20 13:53:03.479637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.164 [2024-11-20 13:53:03.479645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.164 [2024-11-20 13:53:03.479654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.164 [2024-11-20 13:53:03.479660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.164 [2024-11-20 13:53:03.479668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.164 [2024-11-20 13:53:03.479674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.164 [2024-11-20 13:53:03.479683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.164 [2024-11-20 13:53:03.479690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.164 [2024-11-20 13:53:03.479698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.164 [2024-11-20 13:53:03.479734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.164 [2024-11-20 13:53:03.479745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.164 [2024-11-20 13:53:03.479751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.165 [2024-11-20 13:53:03.479760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.165 [2024-11-20 13:53:03.479769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.165 [2024-11-20 13:53:03.479777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.165 [2024-11-20 13:53:03.479783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.165 [2024-11-20 13:53:03.479792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.165 [2024-11-20 13:53:03.479799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.165 [2024-11-20 13:53:03.479807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.165 [2024-11-20 13:53:03.479813] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.165 [2024-11-20 13:53:03.479822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.165 [2024-11-20 13:53:03.479829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.165 [2024-11-20 13:53:03.479837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.165 [2024-11-20 13:53:03.479843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.165 [2024-11-20 13:53:03.479851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.165 [2024-11-20 13:53:03.479858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.165 [2024-11-20 13:53:03.479866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.165 [2024-11-20 13:53:03.479872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.165 [2024-11-20 13:53:03.479880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.165 [2024-11-20 13:53:03.479887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.165 [2024-11-20 13:53:03.479895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.165 [2024-11-20 13:53:03.479901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.165 [2024-11-20 13:53:03.479910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.165 [2024-11-20 13:53:03.479918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.165 [2024-11-20 13:53:03.479927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.165 [2024-11-20 13:53:03.479933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.165 [2024-11-20 13:53:03.479941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.165 [2024-11-20 13:53:03.479948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.165 [2024-11-20 13:53:03.479956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.165 [2024-11-20 13:53:03.479962] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.165 [2024-11-20 13:53:03.479970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.165 [2024-11-20 13:53:03.479977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.165 [2024-11-20 13:53:03.479985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.165 [2024-11-20 13:53:03.479991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.165 [2024-11-20 13:53:03.479999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.165 [2024-11-20 13:53:03.480008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.165 [2024-11-20 13:53:03.480016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.165 [2024-11-20 13:53:03.480022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.165 [2024-11-20 13:53:03.480031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.165 [2024-11-20 13:53:03.480037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.165 [2024-11-20 13:53:03.480045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.165 [2024-11-20 13:53:03.480052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.165 [2024-11-20 13:53:03.480060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.165 [2024-11-20 13:53:03.480066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.165 [2024-11-20 13:53:03.480075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.165 [2024-11-20 13:53:03.480081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:06.165 [2024-11-20 13:53:03.480088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea7130 is same with the state(6) to be set 00:32:06.165 task offset: 106368 on job bdev=Nvme0n1 fails 00:32:06.165 00:32:06.165 Latency(us) 00:32:06.165 [2024-11-20T13:53:03.488Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:06.165 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:06.165 Job: Nvme0n1 ended in about 0.59 seconds with error 00:32:06.165 Verification LBA range: start 0x0 length 0x400 00:32:06.165 
Nvme0n1 : 0.59 1310.75 81.92 109.23 0.00 44016.02 6753.93 42813.04 00:32:06.165 [2024-11-20T13:53:03.488Z] =================================================================================================================== 00:32:06.165 [2024-11-20T13:53:03.488Z] Total : 1310.75 81.92 109.23 0.00 44016.02 6753.93 42813.04 00:32:06.165 [2024-11-20 13:53:03.481349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:06.165 13:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.165 13:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:32:06.165 13:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.165 13:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:06.424 [2024-11-20 13:53:03.483476] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:06.424 [2024-11-20 13:53:03.483508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eacce0 (9): Bad file descriptor 00:32:06.424 [2024-11-20 13:53:03.491439] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:32:06.424 13:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.424 13:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:32:07.374 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 62648 00:32:07.374 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (62648) - No such process 00:32:07.374 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:32:07.374 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:32:07.374 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:32:07.374 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:32:07.374 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:32:07.374 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:32:07.374 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:07.374 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:07.374 { 00:32:07.374 "params": { 00:32:07.374 "name": "Nvme$subsystem", 00:32:07.374 "trtype": "$TEST_TRANSPORT", 00:32:07.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:07.374 "adrfam": "ipv4", 00:32:07.374 "trsvcid": "$NVMF_PORT", 00:32:07.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:07.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:07.374 "hdgst": ${hdgst:-false}, 00:32:07.374 "ddgst": ${ddgst:-false} 00:32:07.374 }, 00:32:07.374 "method": 
"bdev_nvme_attach_controller" 00:32:07.374 } 00:32:07.374 EOF 00:32:07.374 )") 00:32:07.374 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:32:07.374 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:32:07.374 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:32:07.374 13:53:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:07.374 "params": { 00:32:07.374 "name": "Nvme0", 00:32:07.374 "trtype": "tcp", 00:32:07.374 "traddr": "10.0.0.3", 00:32:07.374 "adrfam": "ipv4", 00:32:07.374 "trsvcid": "4420", 00:32:07.374 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:07.374 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:07.374 "hdgst": false, 00:32:07.374 "ddgst": false 00:32:07.374 }, 00:32:07.374 "method": "bdev_nvme_attach_controller" 00:32:07.374 }' 00:32:07.374 [2024-11-20 13:53:04.561679] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:32:07.374 [2024-11-20 13:53:04.561761] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62686 ] 00:32:07.634 [2024-11-20 13:53:04.708854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:07.634 [2024-11-20 13:53:04.762598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:07.634 [2024-11-20 13:53:04.823173] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:32:07.634 Running I/O for 1 seconds... 00:32:09.013 1600.00 IOPS, 100.00 MiB/s 00:32:09.014 Latency(us) 00:32:09.014 [2024-11-20T13:53:06.337Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:09.014 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:09.014 Verification LBA range: start 0x0 length 0x400 00:32:09.014 Nvme0n1 : 1.00 1655.75 103.48 0.00 0.00 38037.55 5151.30 37089.37 00:32:09.014 [2024-11-20T13:53:06.337Z] =================================================================================================================== 00:32:09.014 [2024-11-20T13:53:06.337Z] Total : 1655.75 103.48 0.00 0.00 38037.55 5151.30 37089.37 00:32:09.014 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:32:09.014 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:32:09.014 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:32:09.014 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:32:09.014 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:32:09.014 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:09.014 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:32:09.014 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:09.014 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:32:09.014 
13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:09.014 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:09.014 rmmod nvme_tcp 00:32:09.014 rmmod nvme_fabrics 00:32:09.014 rmmod nvme_keyring 00:32:09.014 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:09.014 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:32:09.014 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:32:09.014 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 62594 ']' 00:32:09.014 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 62594 00:32:09.014 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 62594 ']' 00:32:09.014 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 62594 00:32:09.014 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:32:09.014 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:09.014 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62594 00:32:09.274 killing process with pid 62594 00:32:09.274 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:09.274 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:09.274 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62594' 00:32:09.274 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 62594 00:32:09.274 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 62594 00:32:09.274 [2024-11-20 13:53:06.553762] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:32:09.274 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:09.274 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:09.274 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:09.274 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:32:09.274 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:09.274 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:32:09.274 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:32:09.533 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:09.533 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:32:09.533 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:32:09.533 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management 
-- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:32:09.533 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:32:09.533 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:32:09.533 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:32:09.533 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:32:09.533 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:32:09.533 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:32:09.533 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:32:09.533 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:32:09.533 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:32:09.533 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:09.533 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:09.533 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:32:09.533 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:09.533 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:09.533 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:09.533 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:32:09.533 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:32:09.533 00:32:09.533 real 0m6.724s 00:32:09.533 user 0m23.768s 00:32:09.533 sys 0m1.870s 00:32:09.533 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:09.533 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:09.533 ************************************ 00:32:09.533 END TEST nvmf_host_management 00:32:09.533 ************************************ 00:32:09.792 13:53:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:32:09.793 13:53:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:09.793 13:53:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:09.793 13:53:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:32:09.793 ************************************ 00:32:09.793 START TEST nvmf_lvol 00:32:09.793 ************************************ 00:32:09.793 13:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:32:09.793 * Looking for test 
storage... 00:32:09.793 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:32:09.793 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:09.793 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:32:09.793 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:09.793 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:09.793 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:09.793 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:09.793 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:09.793 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:32:09.793 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:32:09.793 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:32:09.793 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:32:09.793 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:32:09.793 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:32:09.793 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:32:09.793 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:09.793 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:32:09.793 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:32:09.793 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:09.793 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:09.793 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:32:09.793 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:32:09.793 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:09.793 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:32:09.793 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:32:09.793 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:32:09.793 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:32:09.793 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:09.793 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:32:09.793 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:32:09.793 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:09.793 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:09.793 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:32:09.793 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:09.793 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:09.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.793 --rc genhtml_branch_coverage=1 00:32:09.793 --rc genhtml_function_coverage=1 00:32:09.793 --rc genhtml_legend=1 00:32:09.793 --rc geninfo_all_blocks=1 00:32:09.793 --rc geninfo_unexecuted_blocks=1 00:32:09.793 00:32:09.793 ' 00:32:09.793 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:09.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.793 --rc genhtml_branch_coverage=1 00:32:09.793 --rc genhtml_function_coverage=1 00:32:09.793 --rc genhtml_legend=1 00:32:09.793 --rc geninfo_all_blocks=1 00:32:09.793 --rc geninfo_unexecuted_blocks=1 00:32:09.793 00:32:09.793 ' 00:32:09.793 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:09.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.793 --rc genhtml_branch_coverage=1 00:32:09.793 --rc genhtml_function_coverage=1 00:32:09.793 --rc genhtml_legend=1 00:32:09.793 --rc geninfo_all_blocks=1 00:32:09.793 --rc geninfo_unexecuted_blocks=1 00:32:09.793 00:32:09.793 ' 00:32:09.793 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:09.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.793 --rc genhtml_branch_coverage=1 00:32:09.793 --rc genhtml_function_coverage=1 00:32:09.793 --rc genhtml_legend=1 00:32:09.793 --rc geninfo_all_blocks=1 00:32:09.793 --rc geninfo_unexecuted_blocks=1 00:32:09.793 00:32:09.793 ' 00:32:09.793 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:10.053 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:32:10.053 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:10.053 13:53:07 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:10.053 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:10.053 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:10.053 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:10.053 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:10.053 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:10.053 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:10.053 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:10.053 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:10.053 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:32:10.053 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=105ec898-1662-46bd-85be-b241e399edb9 00:32:10.053 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:10.053 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:10.053 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:10.053 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:10.053 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:10.053 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:32:10.053 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:10.053 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:10.053 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:10.053 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.053 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.053 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.053 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:32:10.053 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.053 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:32:10.053 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:10.053 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:10.053 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:10.053 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:10.053 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:10.053 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:10.053 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:10.053 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:10.053 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:10.053 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:10.053 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:10.053 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:10.053 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:32:10.053 
13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:32:10.053 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:10.053 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:32:10.053 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:32:10.054 Cannot find device "nvmf_init_br" 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:32:10.054 Cannot find device "nvmf_init_br2" 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:32:10.054 Cannot find device "nvmf_tgt_br" 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:32:10.054 Cannot find device "nvmf_tgt_br2" 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:32:10.054 Cannot find device "nvmf_init_br" 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:32:10.054 Cannot find device "nvmf_init_br2" 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:32:10.054 Cannot find device "nvmf_tgt_br" 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:32:10.054 Cannot find device "nvmf_tgt_br2" 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:32:10.054 Cannot find device "nvmf_br" 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:32:10.054 Cannot find device "nvmf_init_if" 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:32:10.054 Cannot find device "nvmf_init_if2" 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:10.054 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:10.054 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:32:10.054 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:10.313 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:10.313 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:10.313 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:10.313 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:10.313 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:32:10.313 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:32:10.313 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:32:10.313 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:32:10.313 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:32:10.313 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:32:10.313 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:32:10.313 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:32:10.313 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:32:10.313 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:10.313 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:10.313 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:10.313 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:32:10.313 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:32:10.313 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:32:10.313 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:32:10.313 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:10.313 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:10.313 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:10.313 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:32:10.313 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:32:10.313 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:32:10.313 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:10.314 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:32:10.314 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:32:10.314 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:32:10.314 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:32:10.314 00:32:10.314 --- 10.0.0.3 ping statistics --- 00:32:10.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:10.314 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:32:10.314 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:32:10.314 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:32:10.314 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.066 ms 00:32:10.314 00:32:10.314 --- 10.0.0.4 ping statistics --- 00:32:10.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:10.314 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:32:10.314 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:10.314 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:10.314 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:32:10.314 00:32:10.314 --- 10.0.0.1 ping statistics --- 00:32:10.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:10.314 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:32:10.314 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:32:10.314 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:10.314 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:32:10.314 00:32:10.314 --- 10.0.0.2 ping statistics --- 00:32:10.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:10.314 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:32:10.314 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:10.314 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:32:10.314 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:10.314 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:10.314 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:10.314 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:10.314 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:10.314 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:10.314 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:10.314 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:32:10.314 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:10.314 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:10.314 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:10.314 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=62956 00:32:10.314 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:32:10.314 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 62956 00:32:10.314 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 62956 ']' 00:32:10.314 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:10.314 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:10.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:10.314 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:10.314 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:10.314 13:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:10.573 [2024-11-20 13:53:07.675060] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:32:10.573 [2024-11-20 13:53:07.675143] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:10.573 [2024-11-20 13:53:07.826053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:10.832 [2024-11-20 13:53:07.912185] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:10.832 [2024-11-20 13:53:07.912241] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:10.832 [2024-11-20 13:53:07.912250] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:10.832 [2024-11-20 13:53:07.912256] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:10.832 [2024-11-20 13:53:07.912261] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:10.832 [2024-11-20 13:53:07.913645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:10.832 [2024-11-20 13:53:07.913733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:10.832 [2024-11-20 13:53:07.913750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:10.832 [2024-11-20 13:53:07.959399] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:32:11.400 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:11.400 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:32:11.400 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:11.401 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:11.401 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:11.401 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:11.401 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:11.660 [2024-11-20 13:53:08.888308] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:11.660 13:53:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:11.919 13:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:32:11.919 13:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:12.177 13:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:32:12.177 13:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:32:12.436 13:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:32:13.006 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=453c4135-5e3b-44df-a0ab-dc0b95715f1f 00:32:13.006 13:53:10 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 453c4135-5e3b-44df-a0ab-dc0b95715f1f lvol 20 00:32:13.006 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=960851a9-0d89-45bd-81f2-6f0386154a1b 00:32:13.006 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:13.266 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 960851a9-0d89-45bd-81f2-6f0386154a1b 00:32:13.526 13:53:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:32:13.785 [2024-11-20 13:53:11.051446] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:13.785 13:53:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:32:14.048 13:53:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=63036 00:32:14.048 13:53:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:32:14.048 13:53:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:32:14.989 13:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 960851a9-0d89-45bd-81f2-6f0386154a1b MY_SNAPSHOT 00:32:15.556 13:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=b063e6c5-eb8c-4943-96fa-0d88c2de342e 00:32:15.556 13:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 960851a9-0d89-45bd-81f2-6f0386154a1b 30 00:32:15.815 13:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone b063e6c5-eb8c-4943-96fa-0d88c2de342e MY_CLONE 00:32:16.073 13:53:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=c85810d8-b0b6-41ef-a0ed-23951341f0e0 00:32:16.073 13:53:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate c85810d8-b0b6-41ef-a0ed-23951341f0e0 00:32:16.640 13:53:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 63036 00:32:24.765 Initializing NVMe Controllers 00:32:24.765 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:32:24.765 Controller IO queue size 128, less than required. 00:32:24.765 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:24.765 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:32:24.765 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:32:24.765 Initialization complete. Launching workers. 
00:32:24.765 ======================================================== 00:32:24.765 Latency(us) 00:32:24.765 Device Information : IOPS MiB/s Average min max 00:32:24.765 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 7947.20 31.04 16117.30 2458.57 85508.96 00:32:24.765 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 4519.50 17.65 28322.65 5733.40 146135.24 00:32:24.765 ======================================================== 00:32:24.765 Total : 12466.70 48.70 20542.06 2458.57 146135.24 00:32:24.765 00:32:24.766 13:53:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:24.766 13:53:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 960851a9-0d89-45bd-81f2-6f0386154a1b 00:32:25.025 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 453c4135-5e3b-44df-a0ab-dc0b95715f1f 00:32:25.025 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:32:25.025 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:32:25.025 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:32:25.025 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:25.025 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:32:25.284 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:25.284 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:32:25.284 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:25.284 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:25.284 rmmod nvme_tcp 00:32:25.284 rmmod nvme_fabrics 00:32:25.284 rmmod nvme_keyring 00:32:25.284 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:25.284 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:32:25.284 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:32:25.284 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 62956 ']' 00:32:25.284 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 62956 00:32:25.284 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 62956 ']' 00:32:25.284 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 62956 00:32:25.284 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:32:25.284 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:25.284 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62956 00:32:25.284 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:25.284 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:25.284 killing process with pid 62956 00:32:25.284 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 62956' 00:32:25.284 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 62956 00:32:25.284 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 62956 00:32:25.544 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:25.544 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:25.544 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:25.544 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:32:25.544 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:32:25.544 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:32:25.544 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:25.544 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:25.544 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:32:25.544 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:32:25.544 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:32:25.544 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:32:25.544 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:32:25.544 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:32:25.544 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:32:25.544 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:32:25.544 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:32:25.544 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:32:25.804 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:32:25.804 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:32:25.804 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:25.804 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:25.804 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:32:25.804 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:25.804 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:25.804 13:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:25.804 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:32:25.804 00:32:25.804 real 0m16.127s 00:32:25.804 user 1m5.883s 00:32:25.804 sys 0m3.871s 00:32:25.804 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:32:25.804 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:25.804 ************************************ 00:32:25.804 END TEST nvmf_lvol 00:32:25.804 ************************************ 00:32:25.804 13:53:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:32:25.804 13:53:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:25.804 13:53:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:25.804 13:53:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:32:25.804 ************************************ 00:32:25.804 START TEST nvmf_lvs_grow 00:32:25.804 ************************************ 00:32:25.804 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:32:26.064 * Looking for test storage... 00:32:26.064 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:32:26.064 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:26.064 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:32:26.064 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:26.064 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:26.064 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:26.064 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:26.064 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:26.064 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:32:26.064 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:32:26.064 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:32:26.064 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:32:26.064 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:32:26.064 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:32:26.064 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:32:26.064 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:26.064 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:32:26.064 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:32:26.064 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:26.064 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:26.064 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:32:26.064 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:32:26.064 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:26.064 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:32:26.064 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:32:26.064 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:32:26.064 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:32:26.064 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:26.064 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:32:26.064 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:32:26.064 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:26.064 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:26.064 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:32:26.064 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:26.064 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:26.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.064 --rc genhtml_branch_coverage=1 00:32:26.064 --rc genhtml_function_coverage=1 00:32:26.064 --rc genhtml_legend=1 00:32:26.064 --rc geninfo_all_blocks=1 00:32:26.064 --rc geninfo_unexecuted_blocks=1 00:32:26.064 00:32:26.064 ' 00:32:26.064 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:26.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.064 --rc genhtml_branch_coverage=1 00:32:26.064 --rc genhtml_function_coverage=1 00:32:26.064 --rc genhtml_legend=1 00:32:26.064 --rc geninfo_all_blocks=1 00:32:26.064 --rc geninfo_unexecuted_blocks=1 00:32:26.064 00:32:26.064 ' 00:32:26.064 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:26.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.064 --rc genhtml_branch_coverage=1 00:32:26.064 --rc genhtml_function_coverage=1 00:32:26.064 --rc genhtml_legend=1 00:32:26.064 --rc geninfo_all_blocks=1 00:32:26.064 --rc geninfo_unexecuted_blocks=1 00:32:26.064 00:32:26.064 ' 00:32:26.064 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:26.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.064 --rc genhtml_branch_coverage=1 00:32:26.064 --rc genhtml_function_coverage=1 00:32:26.064 --rc genhtml_legend=1 00:32:26.064 --rc geninfo_all_blocks=1 00:32:26.064 --rc geninfo_unexecuted_blocks=1 00:32:26.064 00:32:26.064 ' 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:32:26.065 13:53:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=105ec898-1662-46bd-85be-b241e399edb9 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:26.065 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
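For orientation: nvmf_lvs_grow.sh drives two SPDK processes over separate JSON-RPC sockets -- the nvmf_tgt target on the default /var/tmp/spdk.sock (launched inside the nvmf_tgt_ns_spdk namespace) and a bdevperf initiator on /var/tmp/bdevperf.sock. A minimal sketch of that split, reusing the paths that appear in the trace (illustrative only, not the test script itself):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock

    # Target-side RPCs go to the default socket (/var/tmp/spdk.sock):
    $rpc_py nvmf_create_transport -t tcp -o -u 8192

    # Initiator-side RPCs are pointed at bdevperf's socket with -s:
    $rpc_py -s "$bdevperf_rpc_sock" bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

Both commands show up verbatim later in this log; the sketch only makes the two-socket layout explicit.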
00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:32:26.065 Cannot find device "nvmf_init_br" 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:32:26.065 Cannot find device "nvmf_init_br2" 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:32:26.065 Cannot find device "nvmf_tgt_br" 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:32:26.065 Cannot find device "nvmf_tgt_br2" 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:32:26.065 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:32:26.325 Cannot find device "nvmf_init_br" 00:32:26.325 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:32:26.325 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:32:26.325 Cannot find device "nvmf_init_br2" 00:32:26.325 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:32:26.325 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:32:26.325 Cannot find device "nvmf_tgt_br" 00:32:26.325 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:32:26.325 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:32:26.325 Cannot find device "nvmf_tgt_br2" 00:32:26.325 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:32:26.325 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:32:26.325 Cannot find device "nvmf_br" 00:32:26.325 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:32:26.325 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:32:26.325 Cannot find device "nvmf_init_if" 00:32:26.325 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:32:26.325 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:32:26.325 Cannot find device "nvmf_init_if2" 00:32:26.325 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:32:26.325 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:26.325 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:26.325 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:32:26.325 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:26.325 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:32:26.325 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:32:26.325 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:32:26.325 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:26.325 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:32:26.325 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:26.325 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:26.325 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:26.325 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:26.325 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:26.325 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:32:26.326 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:32:26.326 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:32:26.326 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:32:26.326 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:32:26.326 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:32:26.326 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:32:26.326 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:32:26.326 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:32:26.326 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:26.326 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:26.326 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:26.326 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:32:26.326 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:32:26.326 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:32:26.326 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:32:26.326 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:26.326 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
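Condensed, the nvmf_veth_init sequence traced above builds a small dual-path topology: two initiator-side veth pairs in the host namespace (10.0.0.1 and 10.0.0.2) and two target-side pairs whose endpoints live in nvmf_tgt_ns_spdk (10.0.0.3 and 10.0.0.4), with all peer ends enslaved to the nvmf_br bridge. A sketch of the same steps using the interface names from the trace (not a verbatim excerpt of nvmf/common.sh):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator, host ns
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if  # target, inside netns
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_if2 nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$br" up
        ip link set "$br" master nvmf_br
    done
    # The iptables ACCEPT rules for TCP port 4420 and the cross-namespace pings
    # that follow in the log simply verify this path before nvmf_tgt starts.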
00:32:26.326 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:26.326 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:32:26.326 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:32:26.326 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:32:26.326 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:26.326 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:32:26.326 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:32:26.585 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:32:26.585 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:32:26.585 00:32:26.585 --- 10.0.0.3 ping statistics --- 00:32:26.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:26.585 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:32:26.585 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:32:26.585 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:32:26.585 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:32:26.585 00:32:26.585 --- 10.0.0.4 ping statistics --- 00:32:26.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:26.585 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:32:26.585 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:26.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:26.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:32:26.585 00:32:26.585 --- 10.0.0.1 ping statistics --- 00:32:26.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:26.585 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:32:26.585 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:32:26.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:26.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:32:26.585 00:32:26.585 --- 10.0.0.2 ping statistics --- 00:32:26.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:26.585 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:32:26.585 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:26.585 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:32:26.585 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:26.585 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:26.585 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:26.585 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:26.585 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:26.585 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:26.585 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:26.585 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:32:26.585 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:26.585 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:26.585 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:26.585 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=63409 00:32:26.585 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:32:26.585 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 63409 00:32:26.585 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 63409 ']' 00:32:26.585 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:26.585 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:26.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:26.585 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:26.585 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:26.585 13:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:26.585 [2024-11-20 13:53:23.723242] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:32:26.585 [2024-11-20 13:53:23.723311] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:26.585 [2024-11-20 13:53:23.872380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:26.844 [2024-11-20 13:53:23.927174] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:26.844 [2024-11-20 13:53:23.927224] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:26.844 [2024-11-20 13:53:23.927231] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:26.844 [2024-11-20 13:53:23.927236] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:26.844 [2024-11-20 13:53:23.927241] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:26.844 [2024-11-20 13:53:23.927527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:26.844 [2024-11-20 13:53:24.002729] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:32:27.412 13:53:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:27.412 13:53:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:32:27.412 13:53:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:27.412 13:53:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:27.412 13:53:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:27.412 13:53:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:27.412 13:53:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:27.672 [2024-11-20 13:53:24.867603] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:27.672 13:53:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:32:27.672 13:53:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:27.672 13:53:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:27.672 13:53:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:27.672 ************************************ 00:32:27.672 START TEST lvs_grow_clean 00:32:27.672 ************************************ 00:32:27.672 13:53:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:32:27.672 13:53:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:27.672 13:53:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:27.672 13:53:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:27.672 13:53:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:27.672 13:53:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:27.672 13:53:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:27.672 13:53:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:32:27.672 13:53:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:32:27.672 13:53:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:27.931 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:27.931 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:28.190 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=ccaa43db-ce25-4902-a62f-480c949092d8 00:32:28.191 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:28.191 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ccaa43db-ce25-4902-a62f-480c949092d8 00:32:28.449 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:28.449 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:28.449 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ccaa43db-ce25-4902-a62f-480c949092d8 lvol 150 00:32:28.707 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=b114b4b5-243e-477f-baa5-5c1416a9cca2 00:32:28.707 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:32:28.707 13:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:28.964 [2024-11-20 13:53:26.177579] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:28.964 [2024-11-20 13:53:26.177654] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:28.964 true 00:32:28.964 13:53:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ccaa43db-ce25-4902-a62f-480c949092d8 00:32:28.964 13:53:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:29.225 13:53:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:29.225 13:53:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:29.482 13:53:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b114b4b5-243e-477f-baa5-5c1416a9cca2 00:32:29.740 13:53:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:32:29.999 [2024-11-20 13:53:27.128394] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:29.999 13:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:32:30.257 13:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:30.257 13:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63495 00:32:30.257 13:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:30.257 13:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63495 /var/tmp/bdevperf.sock 00:32:30.257 13:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 63495 ']' 00:32:30.257 13:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:30.257 13:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:30.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:30.257 13:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:30.257 13:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:30.257 13:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:30.257 [2024-11-20 13:53:27.425286] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:32:30.257 [2024-11-20 13:53:27.425362] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63495 ] 00:32:30.257 [2024-11-20 13:53:27.575841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:30.516 [2024-11-20 13:53:27.636150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:30.516 [2024-11-20 13:53:27.679860] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:32:31.133 13:53:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:31.133 13:53:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:32:31.133 13:53:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:31.392 Nvme0n1 00:32:31.392 13:53:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:31.651 [ 00:32:31.651 { 00:32:31.651 "name": "Nvme0n1", 00:32:31.651 "aliases": [ 00:32:31.651 "b114b4b5-243e-477f-baa5-5c1416a9cca2" 00:32:31.651 ], 00:32:31.651 "product_name": "NVMe disk", 00:32:31.651 "block_size": 4096, 00:32:31.651 "num_blocks": 38912, 00:32:31.651 "uuid": "b114b4b5-243e-477f-baa5-5c1416a9cca2", 00:32:31.651 "numa_id": -1, 00:32:31.651 "assigned_rate_limits": { 00:32:31.651 "rw_ios_per_sec": 0, 00:32:31.651 "rw_mbytes_per_sec": 0, 00:32:31.651 "r_mbytes_per_sec": 0, 00:32:31.651 "w_mbytes_per_sec": 0 00:32:31.651 }, 00:32:31.651 "claimed": false, 00:32:31.651 "zoned": false, 00:32:31.651 "supported_io_types": { 00:32:31.651 "read": true, 00:32:31.651 "write": true, 00:32:31.651 "unmap": true, 00:32:31.651 "flush": true, 00:32:31.651 "reset": true, 00:32:31.651 "nvme_admin": true, 00:32:31.651 "nvme_io": true, 00:32:31.651 "nvme_io_md": false, 00:32:31.651 "write_zeroes": true, 00:32:31.651 "zcopy": false, 00:32:31.651 "get_zone_info": false, 00:32:31.651 "zone_management": false, 00:32:31.651 "zone_append": false, 00:32:31.651 "compare": true, 00:32:31.651 "compare_and_write": true, 00:32:31.651 "abort": true, 00:32:31.651 "seek_hole": false, 00:32:31.651 "seek_data": false, 00:32:31.651 "copy": true, 00:32:31.651 "nvme_iov_md": false 00:32:31.651 }, 00:32:31.651 "memory_domains": [ 00:32:31.651 { 00:32:31.651 "dma_device_id": "system", 00:32:31.651 "dma_device_type": 1 00:32:31.651 } 00:32:31.651 ], 00:32:31.651 "driver_specific": { 00:32:31.651 "nvme": [ 00:32:31.651 { 00:32:31.651 "trid": { 00:32:31.651 "trtype": "TCP", 00:32:31.651 "adrfam": "IPv4", 00:32:31.651 "traddr": "10.0.0.3", 00:32:31.651 "trsvcid": "4420", 00:32:31.651 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:31.651 }, 00:32:31.651 "ctrlr_data": { 00:32:31.651 "cntlid": 1, 00:32:31.651 "vendor_id": "0x8086", 00:32:31.651 "model_number": "SPDK bdev Controller", 00:32:31.651 "serial_number": "SPDK0", 00:32:31.651 "firmware_revision": "25.01", 00:32:31.651 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:31.651 "oacs": { 00:32:31.651 "security": 0, 00:32:31.651 "format": 0, 00:32:31.651 "firmware": 0, 
00:32:31.652 "ns_manage": 0 00:32:31.652 }, 00:32:31.652 "multi_ctrlr": true, 00:32:31.652 "ana_reporting": false 00:32:31.652 }, 00:32:31.652 "vs": { 00:32:31.652 "nvme_version": "1.3" 00:32:31.652 }, 00:32:31.652 "ns_data": { 00:32:31.652 "id": 1, 00:32:31.652 "can_share": true 00:32:31.652 } 00:32:31.652 } 00:32:31.652 ], 00:32:31.652 "mp_policy": "active_passive" 00:32:31.652 } 00:32:31.652 } 00:32:31.652 ] 00:32:31.652 13:53:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:31.652 13:53:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63519 00:32:31.652 13:53:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:31.910 Running I/O for 10 seconds... 00:32:32.847 Latency(us) 00:32:32.847 [2024-11-20T13:53:30.170Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:32.847 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:32.847 Nvme0n1 : 1.00 8573.00 33.49 0.00 0.00 0.00 0.00 0.00 00:32:32.847 [2024-11-20T13:53:30.170Z] =================================================================================================================== 00:32:32.847 [2024-11-20T13:53:30.170Z] Total : 8573.00 33.49 0.00 0.00 0.00 0.00 0.00 00:32:32.847 00:32:33.784 13:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ccaa43db-ce25-4902-a62f-480c949092d8 00:32:33.784 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:33.784 Nvme0n1 : 2.00 8858.50 34.60 0.00 0.00 0.00 0.00 0.00 00:32:33.784 [2024-11-20T13:53:31.107Z] =================================================================================================================== 00:32:33.784 [2024-11-20T13:53:31.107Z] Total : 8858.50 34.60 0.00 0.00 0.00 0.00 0.00 00:32:33.784 00:32:34.043 true 00:32:34.043 13:53:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ccaa43db-ce25-4902-a62f-480c949092d8 00:32:34.043 13:53:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:34.302 13:53:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:34.302 13:53:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:34.302 13:53:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 63519 00:32:34.873 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:34.873 Nvme0n1 : 3.00 8414.67 32.87 0.00 0.00 0.00 0.00 0.00 00:32:34.873 [2024-11-20T13:53:32.196Z] =================================================================================================================== 00:32:34.873 [2024-11-20T13:53:32.196Z] Total : 8414.67 32.87 0.00 0.00 0.00 0.00 0.00 00:32:34.873 00:32:35.811 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:35.811 Nvme0n1 : 4.00 8533.50 33.33 0.00 0.00 0.00 0.00 0.00 00:32:35.811 [2024-11-20T13:53:33.134Z] 
=================================================================================================================== 00:32:35.811 [2024-11-20T13:53:33.134Z] Total : 8533.50 33.33 0.00 0.00 0.00 0.00 0.00 00:32:35.811 00:32:36.756 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:36.756 Nvme0n1 : 5.00 8503.20 33.22 0.00 0.00 0.00 0.00 0.00 00:32:36.756 [2024-11-20T13:53:34.079Z] =================================================================================================================== 00:32:36.756 [2024-11-20T13:53:34.079Z] Total : 8503.20 33.22 0.00 0.00 0.00 0.00 0.00 00:32:36.756 00:32:38.140 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:38.140 Nvme0n1 : 6.00 8483.00 33.14 0.00 0.00 0.00 0.00 0.00 00:32:38.140 [2024-11-20T13:53:35.463Z] =================================================================================================================== 00:32:38.140 [2024-11-20T13:53:35.463Z] Total : 8483.00 33.14 0.00 0.00 0.00 0.00 0.00 00:32:38.140 00:32:39.079 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:39.079 Nvme0n1 : 7.00 8486.71 33.15 0.00 0.00 0.00 0.00 0.00 00:32:39.079 [2024-11-20T13:53:36.402Z] =================================================================================================================== 00:32:39.079 [2024-11-20T13:53:36.402Z] Total : 8486.71 33.15 0.00 0.00 0.00 0.00 0.00 00:32:39.079 00:32:40.016 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:40.016 Nvme0n1 : 8.00 8473.62 33.10 0.00 0.00 0.00 0.00 0.00 00:32:40.016 [2024-11-20T13:53:37.339Z] =================================================================================================================== 00:32:40.016 [2024-11-20T13:53:37.339Z] Total : 8473.62 33.10 0.00 0.00 0.00 0.00 0.00 00:32:40.016 00:32:40.954 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:40.954 Nvme0n1 : 9.00 8491.67 33.17 0.00 0.00 0.00 0.00 0.00 00:32:40.954 [2024-11-20T13:53:38.277Z] =================================================================================================================== 00:32:40.954 [2024-11-20T13:53:38.277Z] Total : 8491.67 33.17 0.00 0.00 0.00 0.00 0.00 00:32:40.954 00:32:41.892 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:41.892 Nvme0n1 : 10.00 8548.60 33.39 0.00 0.00 0.00 0.00 0.00 00:32:41.892 [2024-11-20T13:53:39.215Z] =================================================================================================================== 00:32:41.892 [2024-11-20T13:53:39.215Z] Total : 8548.60 33.39 0.00 0.00 0.00 0.00 0.00 00:32:41.892 00:32:41.892 00:32:41.892 Latency(us) 00:32:41.892 [2024-11-20T13:53:39.215Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:41.892 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:41.892 Nvme0n1 : 10.01 8550.73 33.40 0.00 0.00 14965.52 7498.01 166673.22 00:32:41.892 [2024-11-20T13:53:39.215Z] =================================================================================================================== 00:32:41.892 [2024-11-20T13:53:39.215Z] Total : 8550.73 33.40 0.00 0.00 14965.52 7498.01 166673.22 00:32:41.892 { 00:32:41.892 "results": [ 00:32:41.892 { 00:32:41.892 "job": "Nvme0n1", 00:32:41.892 "core_mask": "0x2", 00:32:41.892 "workload": "randwrite", 00:32:41.892 "status": "finished", 00:32:41.892 "queue_depth": 128, 00:32:41.892 "io_size": 4096, 00:32:41.892 "runtime": 
10.012476, 00:32:41.892 "iops": 8550.732106623776, 00:32:41.892 "mibps": 33.40129729149913, 00:32:41.892 "io_failed": 0, 00:32:41.892 "io_timeout": 0, 00:32:41.892 "avg_latency_us": 14965.520886546428, 00:32:41.892 "min_latency_us": 7498.0052401746725, 00:32:41.892 "max_latency_us": 166673.21572052402 00:32:41.892 } 00:32:41.892 ], 00:32:41.892 "core_count": 1 00:32:41.892 } 00:32:41.892 13:53:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63495 00:32:41.892 13:53:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 63495 ']' 00:32:41.892 13:53:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 63495 00:32:41.892 13:53:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:32:41.892 13:53:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:41.892 13:53:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63495 00:32:41.892 killing process with pid 63495 00:32:41.892 Received shutdown signal, test time was about 10.000000 seconds 00:32:41.892 00:32:41.892 Latency(us) 00:32:41.892 [2024-11-20T13:53:39.215Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:41.892 [2024-11-20T13:53:39.215Z] =================================================================================================================== 00:32:41.892 [2024-11-20T13:53:39.215Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:41.892 13:53:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:41.892 13:53:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:41.892 13:53:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63495' 00:32:41.892 13:53:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 63495 00:32:41.892 13:53:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 63495 00:32:42.150 13:53:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:32:42.409 13:53:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:42.695 13:53:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ccaa43db-ce25-4902-a62f-480c949092d8 00:32:42.695 13:53:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:42.695 13:53:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:42.695 13:53:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:32:42.695 13:53:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:42.954 [2024-11-20 13:53:40.196290] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:42.954 13:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ccaa43db-ce25-4902-a62f-480c949092d8 00:32:42.954 13:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:32:42.954 13:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ccaa43db-ce25-4902-a62f-480c949092d8 00:32:42.954 13:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:42.954 13:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:42.954 13:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:42.954 13:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:42.954 13:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:42.954 13:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:42.954 13:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:42.954 13:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:32:42.954 13:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ccaa43db-ce25-4902-a62f-480c949092d8 00:32:43.213 request: 00:32:43.213 { 00:32:43.213 "uuid": "ccaa43db-ce25-4902-a62f-480c949092d8", 00:32:43.213 "method": "bdev_lvol_get_lvstores", 00:32:43.213 "req_id": 1 00:32:43.213 } 00:32:43.213 Got JSON-RPC error response 00:32:43.213 response: 00:32:43.213 { 00:32:43.213 "code": -19, 00:32:43.213 "message": "No such device" 00:32:43.213 } 00:32:43.213 13:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:32:43.213 13:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:43.213 13:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:43.213 13:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:43.213 13:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:43.471 aio_bdev 00:32:43.471 13:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
b114b4b5-243e-477f-baa5-5c1416a9cca2 00:32:43.471 13:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=b114b4b5-243e-477f-baa5-5c1416a9cca2 00:32:43.471 13:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:43.471 13:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:32:43.471 13:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:43.471 13:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:43.471 13:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:43.729 13:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b114b4b5-243e-477f-baa5-5c1416a9cca2 -t 2000 00:32:43.989 [ 00:32:43.989 { 00:32:43.989 "name": "b114b4b5-243e-477f-baa5-5c1416a9cca2", 00:32:43.989 "aliases": [ 00:32:43.989 "lvs/lvol" 00:32:43.989 ], 00:32:43.989 "product_name": "Logical Volume", 00:32:43.989 "block_size": 4096, 00:32:43.989 "num_blocks": 38912, 00:32:43.989 "uuid": "b114b4b5-243e-477f-baa5-5c1416a9cca2", 00:32:43.989 "assigned_rate_limits": { 00:32:43.989 "rw_ios_per_sec": 0, 00:32:43.989 "rw_mbytes_per_sec": 0, 00:32:43.989 "r_mbytes_per_sec": 0, 00:32:43.989 "w_mbytes_per_sec": 0 00:32:43.989 }, 00:32:43.989 "claimed": false, 00:32:43.989 "zoned": false, 00:32:43.989 "supported_io_types": { 00:32:43.989 "read": true, 00:32:43.989 "write": true, 00:32:43.989 "unmap": true, 00:32:43.989 "flush": false, 00:32:43.989 "reset": true, 00:32:43.989 "nvme_admin": false, 00:32:43.989 "nvme_io": false, 00:32:43.989 "nvme_io_md": false, 00:32:43.989 "write_zeroes": true, 00:32:43.989 "zcopy": false, 00:32:43.989 "get_zone_info": false, 00:32:43.989 "zone_management": false, 00:32:43.989 "zone_append": false, 00:32:43.989 "compare": false, 00:32:43.989 "compare_and_write": false, 00:32:43.989 "abort": false, 00:32:43.989 "seek_hole": true, 00:32:43.989 "seek_data": true, 00:32:43.989 "copy": false, 00:32:43.989 "nvme_iov_md": false 00:32:43.989 }, 00:32:43.989 "driver_specific": { 00:32:43.989 "lvol": { 00:32:43.989 "lvol_store_uuid": "ccaa43db-ce25-4902-a62f-480c949092d8", 00:32:43.989 "base_bdev": "aio_bdev", 00:32:43.989 "thin_provision": false, 00:32:43.989 "num_allocated_clusters": 38, 00:32:43.989 "snapshot": false, 00:32:43.989 "clone": false, 00:32:43.989 "esnap_clone": false 00:32:43.989 } 00:32:43.989 } 00:32:43.989 } 00:32:43.989 ] 00:32:43.989 13:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:32:43.989 13:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:43.989 13:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ccaa43db-ce25-4902-a62f-480c949092d8 00:32:44.248 13:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:44.248 13:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ccaa43db-ce25-4902-a62f-480c949092d8 00:32:44.248 13:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:44.507 13:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:44.507 13:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b114b4b5-243e-477f-baa5-5c1416a9cca2 00:32:44.766 13:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ccaa43db-ce25-4902-a62f-480c949092d8 00:32:45.025 13:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:45.285 13:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:32:45.544 ************************************ 00:32:45.544 END TEST lvs_grow_clean 00:32:45.544 ************************************ 00:32:45.544 00:32:45.544 real 0m17.898s 00:32:45.544 user 0m16.835s 00:32:45.544 sys 0m2.391s 00:32:45.544 13:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:45.544 13:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:45.544 13:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:32:45.544 13:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:45.544 13:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:45.544 13:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:45.544 ************************************ 00:32:45.544 START TEST lvs_grow_dirty 00:32:45.544 ************************************ 00:32:45.544 13:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:32:45.544 13:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:45.544 13:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:45.544 13:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:45.544 13:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:45.544 13:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:45.544 13:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:45.544 13:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:32:45.803 13:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:32:45.803 13:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:45.803 13:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:45.803 13:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:46.063 13:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=86734f25-444f-4ffd-9df9-d756b83cd2d0 00:32:46.063 13:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86734f25-444f-4ffd-9df9-d756b83cd2d0 00:32:46.063 13:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:46.348 13:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:46.348 13:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:46.348 13:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 86734f25-444f-4ffd-9df9-d756b83cd2d0 lvol 150 00:32:46.606 13:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=14db3fb5-ec95-440c-bf43-578914053a55 00:32:46.606 13:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:32:46.606 13:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:46.866 [2024-11-20 13:53:44.027715] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:46.866 [2024-11-20 13:53:44.027783] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:46.866 true 00:32:46.866 13:53:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86734f25-444f-4ffd-9df9-d756b83cd2d0 00:32:46.866 13:53:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:47.124 13:53:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:47.124 13:53:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:47.383 13:53:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 14db3fb5-ec95-440c-bf43-578914053a55 00:32:47.643 13:53:44 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:32:47.643 [2024-11-20 13:53:44.938393] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:47.644 13:53:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:32:47.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:47.902 13:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63765 00:32:47.902 13:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:47.902 13:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63765 /var/tmp/bdevperf.sock 00:32:47.902 13:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63765 ']' 00:32:47.902 13:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:47.902 13:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:47.902 13:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:47.902 13:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:47.902 13:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:47.902 13:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:48.161 [2024-11-20 13:53:45.224583] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:32:48.161 [2024-11-20 13:53:45.224654] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63765 ] 00:32:48.161 [2024-11-20 13:53:45.371986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:48.161 [2024-11-20 13:53:45.430548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:48.161 [2024-11-20 13:53:45.473264] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:32:49.100 13:53:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:49.100 13:53:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:32:49.100 13:53:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:49.100 Nvme0n1 00:32:49.100 13:53:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:49.360 [ 00:32:49.360 { 00:32:49.360 "name": "Nvme0n1", 00:32:49.360 "aliases": [ 00:32:49.360 "14db3fb5-ec95-440c-bf43-578914053a55" 00:32:49.360 ], 00:32:49.360 "product_name": "NVMe disk", 00:32:49.360 "block_size": 4096, 00:32:49.360 "num_blocks": 38912, 00:32:49.360 "uuid": "14db3fb5-ec95-440c-bf43-578914053a55", 00:32:49.360 "numa_id": -1, 00:32:49.360 "assigned_rate_limits": { 00:32:49.360 "rw_ios_per_sec": 0, 00:32:49.360 "rw_mbytes_per_sec": 0, 00:32:49.360 "r_mbytes_per_sec": 0, 00:32:49.360 "w_mbytes_per_sec": 0 00:32:49.360 }, 00:32:49.360 "claimed": false, 00:32:49.360 "zoned": false, 00:32:49.360 "supported_io_types": { 00:32:49.360 "read": true, 00:32:49.360 "write": true, 00:32:49.360 "unmap": true, 00:32:49.360 "flush": true, 00:32:49.360 "reset": true, 00:32:49.360 "nvme_admin": true, 00:32:49.360 "nvme_io": true, 00:32:49.360 "nvme_io_md": false, 00:32:49.360 "write_zeroes": true, 00:32:49.360 "zcopy": false, 00:32:49.360 "get_zone_info": false, 00:32:49.360 "zone_management": false, 00:32:49.360 "zone_append": false, 00:32:49.360 "compare": true, 00:32:49.360 "compare_and_write": true, 00:32:49.360 "abort": true, 00:32:49.360 "seek_hole": false, 00:32:49.360 "seek_data": false, 00:32:49.360 "copy": true, 00:32:49.360 "nvme_iov_md": false 00:32:49.360 }, 00:32:49.360 "memory_domains": [ 00:32:49.360 { 00:32:49.360 "dma_device_id": "system", 00:32:49.360 "dma_device_type": 1 00:32:49.360 } 00:32:49.360 ], 00:32:49.360 "driver_specific": { 00:32:49.360 "nvme": [ 00:32:49.360 { 00:32:49.360 "trid": { 00:32:49.360 "trtype": "TCP", 00:32:49.360 "adrfam": "IPv4", 00:32:49.360 "traddr": "10.0.0.3", 00:32:49.360 "trsvcid": "4420", 00:32:49.360 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:49.360 }, 00:32:49.360 "ctrlr_data": { 00:32:49.360 "cntlid": 1, 00:32:49.360 "vendor_id": "0x8086", 00:32:49.360 "model_number": "SPDK bdev Controller", 00:32:49.360 "serial_number": "SPDK0", 00:32:49.360 "firmware_revision": "25.01", 00:32:49.360 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:49.360 "oacs": { 00:32:49.360 "security": 0, 00:32:49.360 "format": 0, 00:32:49.360 "firmware": 0, 
00:32:49.360 "ns_manage": 0 00:32:49.360 }, 00:32:49.360 "multi_ctrlr": true, 00:32:49.360 "ana_reporting": false 00:32:49.360 }, 00:32:49.360 "vs": { 00:32:49.360 "nvme_version": "1.3" 00:32:49.360 }, 00:32:49.360 "ns_data": { 00:32:49.360 "id": 1, 00:32:49.360 "can_share": true 00:32:49.360 } 00:32:49.360 } 00:32:49.360 ], 00:32:49.360 "mp_policy": "active_passive" 00:32:49.360 } 00:32:49.360 } 00:32:49.360 ] 00:32:49.360 13:53:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63787 00:32:49.360 13:53:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:49.360 13:53:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:49.617 Running I/O for 10 seconds... 00:32:50.549 Latency(us) 00:32:50.549 [2024-11-20T13:53:47.872Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:50.549 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:50.549 Nvme0n1 : 1.00 7216.00 28.19 0.00 0.00 0.00 0.00 0.00 00:32:50.550 [2024-11-20T13:53:47.873Z] =================================================================================================================== 00:32:50.550 [2024-11-20T13:53:47.873Z] Total : 7216.00 28.19 0.00 0.00 0.00 0.00 0.00 00:32:50.550 00:32:51.482 13:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 86734f25-444f-4ffd-9df9-d756b83cd2d0 00:32:51.482 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:51.482 Nvme0n1 : 2.00 7164.00 27.98 0.00 0.00 0.00 0.00 0.00 00:32:51.482 [2024-11-20T13:53:48.805Z] =================================================================================================================== 00:32:51.482 [2024-11-20T13:53:48.805Z] Total : 7164.00 27.98 0.00 0.00 0.00 0.00 0.00 00:32:51.482 00:32:51.740 true 00:32:51.740 13:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86734f25-444f-4ffd-9df9-d756b83cd2d0 00:32:51.740 13:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:51.999 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:51.999 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:51.999 13:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 63787 00:32:52.568 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:52.568 Nvme0n1 : 3.00 7527.67 29.40 0.00 0.00 0.00 0.00 0.00 00:32:52.568 [2024-11-20T13:53:49.891Z] =================================================================================================================== 00:32:52.568 [2024-11-20T13:53:49.891Z] Total : 7527.67 29.40 0.00 0.00 0.00 0.00 0.00 00:32:52.568 00:32:53.501 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:53.501 Nvme0n1 : 4.00 7708.25 30.11 0.00 0.00 0.00 0.00 0.00 00:32:53.501 [2024-11-20T13:53:50.824Z] 
=================================================================================================================== 00:32:53.501 [2024-11-20T13:53:50.824Z] Total : 7708.25 30.11 0.00 0.00 0.00 0.00 0.00 00:32:53.501 00:32:54.509 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:54.509 Nvme0n1 : 5.00 7614.40 29.74 0.00 0.00 0.00 0.00 0.00 00:32:54.509 [2024-11-20T13:53:51.832Z] =================================================================================================================== 00:32:54.509 [2024-11-20T13:53:51.832Z] Total : 7614.40 29.74 0.00 0.00 0.00 0.00 0.00 00:32:54.509 00:32:55.445 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:55.445 Nvme0n1 : 6.00 7738.83 30.23 0.00 0.00 0.00 0.00 0.00 00:32:55.445 [2024-11-20T13:53:52.768Z] =================================================================================================================== 00:32:55.445 [2024-11-20T13:53:52.768Z] Total : 7738.83 30.23 0.00 0.00 0.00 0.00 0.00 00:32:55.445 00:32:56.822 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:56.822 Nvme0n1 : 7.00 7921.43 30.94 0.00 0.00 0.00 0.00 0.00 00:32:56.822 [2024-11-20T13:53:54.145Z] =================================================================================================================== 00:32:56.822 [2024-11-20T13:53:54.145Z] Total : 7921.43 30.94 0.00 0.00 0.00 0.00 0.00 00:32:56.822 00:32:57.756 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:57.756 Nvme0n1 : 8.00 7847.38 30.65 0.00 0.00 0.00 0.00 0.00 00:32:57.756 [2024-11-20T13:53:55.079Z] =================================================================================================================== 00:32:57.756 [2024-11-20T13:53:55.079Z] Total : 7847.38 30.65 0.00 0.00 0.00 0.00 0.00 00:32:57.756 00:32:58.697 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:58.697 Nvme0n1 : 9.00 7892.67 30.83 0.00 0.00 0.00 0.00 0.00 00:32:58.697 [2024-11-20T13:53:56.020Z] =================================================================================================================== 00:32:58.697 [2024-11-20T13:53:56.020Z] Total : 7892.67 30.83 0.00 0.00 0.00 0.00 0.00 00:32:58.697 00:32:59.643 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:59.643 Nvme0n1 : 10.00 7928.90 30.97 0.00 0.00 0.00 0.00 0.00 00:32:59.643 [2024-11-20T13:53:56.966Z] =================================================================================================================== 00:32:59.643 [2024-11-20T13:53:56.966Z] Total : 7928.90 30.97 0.00 0.00 0.00 0.00 0.00 00:32:59.643 00:32:59.643 00:32:59.643 Latency(us) 00:32:59.643 [2024-11-20T13:53:56.966Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:59.643 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:59.643 Nvme0n1 : 10.00 7925.18 30.96 0.00 0.00 16143.02 4893.74 143778.54 00:32:59.643 [2024-11-20T13:53:56.966Z] =================================================================================================================== 00:32:59.643 [2024-11-20T13:53:56.966Z] Total : 7925.18 30.96 0.00 0.00 16143.02 4893.74 143778.54 00:32:59.643 { 00:32:59.643 "results": [ 00:32:59.643 { 00:32:59.643 "job": "Nvme0n1", 00:32:59.643 "core_mask": "0x2", 00:32:59.643 "workload": "randwrite", 00:32:59.643 "status": "finished", 00:32:59.643 "queue_depth": 128, 00:32:59.643 "io_size": 4096, 00:32:59.643 "runtime": 
10.004818, 00:32:59.643 "iops": 7925.181647482243, 00:32:59.643 "mibps": 30.95774081047751, 00:32:59.643 "io_failed": 0, 00:32:59.643 "io_timeout": 0, 00:32:59.643 "avg_latency_us": 16143.018340699471, 00:32:59.643 "min_latency_us": 4893.736244541485, 00:32:59.643 "max_latency_us": 143778.54323144106 00:32:59.643 } 00:32:59.643 ], 00:32:59.643 "core_count": 1 00:32:59.643 } 00:32:59.643 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63765 00:32:59.643 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 63765 ']' 00:32:59.643 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 63765 00:32:59.643 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:32:59.643 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:59.643 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63765 00:32:59.643 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:59.643 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:59.643 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63765' 00:32:59.643 killing process with pid 63765 00:32:59.643 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 63765 00:32:59.643 Received shutdown signal, test time was about 10.000000 seconds 00:32:59.643 00:32:59.643 Latency(us) 00:32:59.643 [2024-11-20T13:53:56.966Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:59.643 [2024-11-20T13:53:56.966Z] =================================================================================================================== 00:32:59.643 [2024-11-20T13:53:56.966Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:59.643 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 63765 00:32:59.903 13:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:33:00.163 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:00.163 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86734f25-444f-4ffd-9df9-d756b83cd2d0 00:33:00.163 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:33:00.423 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:33:00.423 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:33:00.423 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 63409 
00:33:00.423 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 63409 00:33:00.423 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 63409 Killed "${NVMF_APP[@]}" "$@" 00:33:00.423 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:33:00.423 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:33:00.423 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:00.423 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:00.423 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:00.683 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=63921 00:33:00.683 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:33:00.683 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 63921 00:33:00.683 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63921 ']' 00:33:00.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:00.683 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:00.683 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:00.683 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:00.683 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:00.683 13:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:00.683 [2024-11-20 13:53:57.803341] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:33:00.683 [2024-11-20 13:53:57.803412] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:00.683 [2024-11-20 13:53:57.953890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:00.942 [2024-11-20 13:53:58.009300] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:00.942 [2024-11-20 13:53:58.009436] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:00.942 [2024-11-20 13:53:58.009481] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:00.942 [2024-11-20 13:53:58.009533] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:00.942 [2024-11-20 13:53:58.009553] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:00.942 [2024-11-20 13:53:58.009879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:00.942 [2024-11-20 13:53:58.085958] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:33:01.511 13:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:01.511 13:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:33:01.511 13:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:01.511 13:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:01.511 13:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:01.511 13:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:01.511 13:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:01.772 [2024-11-20 13:53:58.983319] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:33:01.772 [2024-11-20 13:53:58.983677] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:33:01.772 [2024-11-20 13:53:58.983886] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:33:01.772 13:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:33:01.772 13:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 14db3fb5-ec95-440c-bf43-578914053a55 00:33:01.772 13:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=14db3fb5-ec95-440c-bf43-578914053a55 00:33:01.772 13:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:01.772 13:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:33:01.772 13:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:01.772 13:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:01.772 13:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:02.030 13:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 14db3fb5-ec95-440c-bf43-578914053a55 -t 2000 00:33:02.289 [ 00:33:02.289 { 00:33:02.289 "name": "14db3fb5-ec95-440c-bf43-578914053a55", 00:33:02.289 "aliases": [ 00:33:02.289 "lvs/lvol" 00:33:02.289 ], 00:33:02.289 "product_name": "Logical Volume", 00:33:02.289 "block_size": 4096, 00:33:02.289 "num_blocks": 38912, 00:33:02.289 "uuid": "14db3fb5-ec95-440c-bf43-578914053a55", 00:33:02.289 "assigned_rate_limits": { 00:33:02.289 "rw_ios_per_sec": 0, 00:33:02.289 "rw_mbytes_per_sec": 0, 00:33:02.289 "r_mbytes_per_sec": 0, 00:33:02.289 "w_mbytes_per_sec": 0 00:33:02.289 }, 00:33:02.289 
"claimed": false, 00:33:02.289 "zoned": false, 00:33:02.289 "supported_io_types": { 00:33:02.289 "read": true, 00:33:02.289 "write": true, 00:33:02.289 "unmap": true, 00:33:02.289 "flush": false, 00:33:02.289 "reset": true, 00:33:02.289 "nvme_admin": false, 00:33:02.289 "nvme_io": false, 00:33:02.289 "nvme_io_md": false, 00:33:02.289 "write_zeroes": true, 00:33:02.289 "zcopy": false, 00:33:02.289 "get_zone_info": false, 00:33:02.289 "zone_management": false, 00:33:02.289 "zone_append": false, 00:33:02.289 "compare": false, 00:33:02.289 "compare_and_write": false, 00:33:02.289 "abort": false, 00:33:02.289 "seek_hole": true, 00:33:02.289 "seek_data": true, 00:33:02.289 "copy": false, 00:33:02.289 "nvme_iov_md": false 00:33:02.289 }, 00:33:02.289 "driver_specific": { 00:33:02.289 "lvol": { 00:33:02.289 "lvol_store_uuid": "86734f25-444f-4ffd-9df9-d756b83cd2d0", 00:33:02.289 "base_bdev": "aio_bdev", 00:33:02.289 "thin_provision": false, 00:33:02.289 "num_allocated_clusters": 38, 00:33:02.289 "snapshot": false, 00:33:02.289 "clone": false, 00:33:02.289 "esnap_clone": false 00:33:02.289 } 00:33:02.289 } 00:33:02.289 } 00:33:02.289 ] 00:33:02.289 13:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:33:02.289 13:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86734f25-444f-4ffd-9df9-d756b83cd2d0 00:33:02.289 13:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:33:02.548 13:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:33:02.548 13:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86734f25-444f-4ffd-9df9-d756b83cd2d0 00:33:02.548 13:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:33:02.806 13:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:33:02.806 13:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:02.806 [2024-11-20 13:54:00.118646] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:33:03.065 13:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86734f25-444f-4ffd-9df9-d756b83cd2d0 00:33:03.065 13:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:33:03.065 13:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86734f25-444f-4ffd-9df9-d756b83cd2d0 00:33:03.065 13:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:03.065 13:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:03.065 13:54:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:03.065 13:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:03.065 13:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:03.065 13:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:03.065 13:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:03.065 13:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:33:03.065 13:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86734f25-444f-4ffd-9df9-d756b83cd2d0 00:33:03.065 request: 00:33:03.065 { 00:33:03.065 "uuid": "86734f25-444f-4ffd-9df9-d756b83cd2d0", 00:33:03.065 "method": "bdev_lvol_get_lvstores", 00:33:03.065 "req_id": 1 00:33:03.065 } 00:33:03.065 Got JSON-RPC error response 00:33:03.065 response: 00:33:03.065 { 00:33:03.065 "code": -19, 00:33:03.065 "message": "No such device" 00:33:03.065 } 00:33:03.065 13:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:33:03.065 13:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:03.065 13:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:03.065 13:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:03.065 13:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:03.334 aio_bdev 00:33:03.334 13:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 14db3fb5-ec95-440c-bf43-578914053a55 00:33:03.334 13:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=14db3fb5-ec95-440c-bf43-578914053a55 00:33:03.334 13:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:03.334 13:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:33:03.334 13:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:03.334 13:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:03.334 13:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:03.593 13:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 14db3fb5-ec95-440c-bf43-578914053a55 -t 2000 00:33:03.853 [ 00:33:03.853 { 
00:33:03.853 "name": "14db3fb5-ec95-440c-bf43-578914053a55", 00:33:03.853 "aliases": [ 00:33:03.853 "lvs/lvol" 00:33:03.853 ], 00:33:03.853 "product_name": "Logical Volume", 00:33:03.853 "block_size": 4096, 00:33:03.853 "num_blocks": 38912, 00:33:03.853 "uuid": "14db3fb5-ec95-440c-bf43-578914053a55", 00:33:03.853 "assigned_rate_limits": { 00:33:03.853 "rw_ios_per_sec": 0, 00:33:03.853 "rw_mbytes_per_sec": 0, 00:33:03.853 "r_mbytes_per_sec": 0, 00:33:03.853 "w_mbytes_per_sec": 0 00:33:03.853 }, 00:33:03.853 "claimed": false, 00:33:03.853 "zoned": false, 00:33:03.853 "supported_io_types": { 00:33:03.853 "read": true, 00:33:03.853 "write": true, 00:33:03.853 "unmap": true, 00:33:03.853 "flush": false, 00:33:03.853 "reset": true, 00:33:03.853 "nvme_admin": false, 00:33:03.853 "nvme_io": false, 00:33:03.853 "nvme_io_md": false, 00:33:03.853 "write_zeroes": true, 00:33:03.853 "zcopy": false, 00:33:03.853 "get_zone_info": false, 00:33:03.853 "zone_management": false, 00:33:03.853 "zone_append": false, 00:33:03.853 "compare": false, 00:33:03.853 "compare_and_write": false, 00:33:03.853 "abort": false, 00:33:03.853 "seek_hole": true, 00:33:03.853 "seek_data": true, 00:33:03.853 "copy": false, 00:33:03.853 "nvme_iov_md": false 00:33:03.853 }, 00:33:03.853 "driver_specific": { 00:33:03.853 "lvol": { 00:33:03.853 "lvol_store_uuid": "86734f25-444f-4ffd-9df9-d756b83cd2d0", 00:33:03.853 "base_bdev": "aio_bdev", 00:33:03.853 "thin_provision": false, 00:33:03.853 "num_allocated_clusters": 38, 00:33:03.853 "snapshot": false, 00:33:03.853 "clone": false, 00:33:03.853 "esnap_clone": false 00:33:03.853 } 00:33:03.853 } 00:33:03.853 } 00:33:03.853 ] 00:33:03.853 13:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:33:03.853 13:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:33:03.853 13:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86734f25-444f-4ffd-9df9-d756b83cd2d0 00:33:04.113 13:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:33:04.113 13:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86734f25-444f-4ffd-9df9-d756b83cd2d0 00:33:04.113 13:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:33:04.373 13:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:33:04.373 13:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 14db3fb5-ec95-440c-bf43-578914053a55 00:33:04.373 13:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 86734f25-444f-4ffd-9df9-d756b83cd2d0 00:33:04.632 13:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:04.891 13:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:33:05.460 00:33:05.460 real 0m19.652s 00:33:05.460 user 0m41.537s 00:33:05.460 sys 0m7.133s 00:33:05.460 13:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:05.460 13:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:05.460 ************************************ 00:33:05.460 END TEST lvs_grow_dirty 00:33:05.460 ************************************ 00:33:05.460 13:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:33:05.460 13:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:33:05.460 13:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:33:05.460 13:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:33:05.460 13:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:33:05.460 13:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:33:05.460 13:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:33:05.460 13:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:33:05.460 13:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:33:05.460 nvmf_trace.0 00:33:05.460 13:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:33:05.460 13:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:33:05.460 13:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:05.460 13:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:33:06.029 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:06.029 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:33:06.029 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:06.029 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:06.029 rmmod nvme_tcp 00:33:06.029 rmmod nvme_fabrics 00:33:06.029 rmmod nvme_keyring 00:33:06.029 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:06.029 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:33:06.029 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:33:06.029 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 63921 ']' 00:33:06.029 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 63921 00:33:06.029 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 63921 ']' 00:33:06.029 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 63921 00:33:06.029 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:33:06.029 13:54:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:06.029 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63921 00:33:06.029 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:06.029 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:06.029 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63921' 00:33:06.029 killing process with pid 63921 00:33:06.029 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 63921 00:33:06.029 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 63921 00:33:06.289 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:06.289 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:06.289 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:06.289 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:33:06.289 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:33:06.289 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:33:06.289 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:06.289 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:06.289 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:33:06.289 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:33:06.289 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:33:06.289 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:33:06.289 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:33:06.289 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:33:06.289 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:33:06.289 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:33:06.289 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:33:06.289 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:33:06.289 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:33:06.289 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:33:06.289 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:06.547 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:06.547 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:33:06.547 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:06.547 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:06.548 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:06.548 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:33:06.548 00:33:06.548 real 0m40.607s 00:33:06.548 user 1m4.599s 00:33:06.548 sys 0m10.561s 00:33:06.548 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:06.548 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:06.548 ************************************ 00:33:06.548 END TEST nvmf_lvs_grow 00:33:06.548 ************************************ 00:33:06.548 13:54:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:33:06.548 13:54:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:06.548 13:54:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:06.548 13:54:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:33:06.548 ************************************ 00:33:06.548 START TEST nvmf_bdev_io_wait 00:33:06.548 ************************************ 00:33:06.548 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:33:06.808 * Looking for test storage... 
00:33:06.808 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:06.808 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:06.808 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:33:06.808 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:06.808 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:06.808 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:06.808 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:06.808 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:06.808 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:33:06.808 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:33:06.808 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:33:06.808 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:33:06.808 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:33:06.808 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:33:06.808 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:33:06.808 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:06.808 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:33:06.808 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:33:06.808 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:06.808 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:06.808 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:33:06.808 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:33:06.808 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:06.808 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:33:06.808 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:33:06.808 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:33:06.808 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:33:06.808 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:06.808 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:33:06.808 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:33:06.808 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:06.808 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:06.808 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:33:06.808 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:06.808 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:06.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:06.808 --rc genhtml_branch_coverage=1 00:33:06.808 --rc genhtml_function_coverage=1 00:33:06.808 --rc genhtml_legend=1 00:33:06.808 --rc geninfo_all_blocks=1 00:33:06.808 --rc geninfo_unexecuted_blocks=1 00:33:06.808 00:33:06.808 ' 00:33:06.808 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:06.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:06.808 --rc genhtml_branch_coverage=1 00:33:06.808 --rc genhtml_function_coverage=1 00:33:06.808 --rc genhtml_legend=1 00:33:06.808 --rc geninfo_all_blocks=1 00:33:06.808 --rc geninfo_unexecuted_blocks=1 00:33:06.808 00:33:06.808 ' 00:33:06.808 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:06.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:06.808 --rc genhtml_branch_coverage=1 00:33:06.808 --rc genhtml_function_coverage=1 00:33:06.808 --rc genhtml_legend=1 00:33:06.808 --rc geninfo_all_blocks=1 00:33:06.808 --rc geninfo_unexecuted_blocks=1 00:33:06.808 00:33:06.808 ' 00:33:06.808 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:06.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:06.808 --rc genhtml_branch_coverage=1 00:33:06.808 --rc genhtml_function_coverage=1 00:33:06.808 --rc genhtml_legend=1 00:33:06.808 --rc geninfo_all_blocks=1 00:33:06.808 --rc geninfo_unexecuted_blocks=1 00:33:06.808 00:33:06.808 ' 00:33:06.808 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:06.808 13:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:33:06.808 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:06.808 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:06.808 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:06.808 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:06.808 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:06.808 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:06.808 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:06.808 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:06.808 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:06.808 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:06.808 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:33:06.808 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=105ec898-1662-46bd-85be-b241e399edb9 00:33:06.808 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:06.808 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:06.808 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:06.808 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:06.808 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:06.808 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:33:06.808 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:06.808 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:06.809 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:06.809 
13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:33:06.809 Cannot find device "nvmf_init_br" 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:33:06.809 Cannot find device "nvmf_init_br2" 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:33:06.809 Cannot find device "nvmf_tgt_br" 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:33:06.809 Cannot find device "nvmf_tgt_br2" 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:33:06.809 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:33:07.069 Cannot find device "nvmf_init_br" 00:33:07.069 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:33:07.069 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:33:07.069 Cannot find device "nvmf_init_br2" 00:33:07.069 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:33:07.069 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:33:07.069 Cannot find device "nvmf_tgt_br" 00:33:07.069 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:33:07.069 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:33:07.069 Cannot find device "nvmf_tgt_br2" 00:33:07.069 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:33:07.069 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:33:07.069 Cannot find device "nvmf_br" 00:33:07.069 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:33:07.069 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:33:07.069 Cannot find device "nvmf_init_if" 00:33:07.069 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:33:07.069 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:33:07.069 Cannot find device "nvmf_init_if2" 00:33:07.069 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:33:07.069 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:07.069 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:07.069 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:33:07.069 
13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:07.069 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:07.069 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:33:07.069 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:33:07.069 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:07.069 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:33:07.069 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:07.069 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:07.069 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:07.069 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:07.069 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:07.069 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:33:07.069 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:33:07.069 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:33:07.069 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:33:07.069 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:33:07.069 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:33:07.069 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:33:07.069 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:33:07.069 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:33:07.069 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:07.069 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:07.330 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:07.330 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:33:07.330 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:33:07.330 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:33:07.330 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:33:07.330 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:07.330 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:07.330 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:07.330 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:33:07.330 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:33:07.330 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:33:07.330 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:07.330 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:33:07.330 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:33:07.330 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:07.330 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:33:07.330 00:33:07.330 --- 10.0.0.3 ping statistics --- 00:33:07.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:07.330 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:33:07.330 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:33:07.330 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:33:07.330 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.080 ms 00:33:07.330 00:33:07.330 --- 10.0.0.4 ping statistics --- 00:33:07.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:07.330 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:33:07.330 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:07.330 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:07.330 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:33:07.330 00:33:07.330 --- 10.0.0.1 ping statistics --- 00:33:07.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:07.330 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:33:07.330 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:33:07.330 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:07.330 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:33:07.330 00:33:07.330 --- 10.0.0.2 ping statistics --- 00:33:07.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:07.330 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:33:07.330 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:07.330 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:33:07.330 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:07.330 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:07.330 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:07.330 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:07.330 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:07.330 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:07.330 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:07.330 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:33:07.330 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:07.330 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:07.330 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:07.330 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=64290 00:33:07.330 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:33:07.330 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 64290 00:33:07.330 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 64290 ']' 00:33:07.330 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:07.330 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:07.330 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:07.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:07.330 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:07.330 13:54:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:07.330 [2024-11-20 13:54:04.603261] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:33:07.330 [2024-11-20 13:54:04.603877] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:07.590 [2024-11-20 13:54:04.755652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:07.590 [2024-11-20 13:54:04.818027] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:07.590 [2024-11-20 13:54:04.818161] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:07.590 [2024-11-20 13:54:04.818213] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:07.590 [2024-11-20 13:54:04.818295] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:07.590 [2024-11-20 13:54:04.818323] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:07.590 [2024-11-20 13:54:04.819609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:07.590 [2024-11-20 13:54:04.821791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:07.590 [2024-11-20 13:54:04.821855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:07.590 [2024-11-20 13:54:04.821856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:08.536 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:08.536 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:33:08.536 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:08.536 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:08.536 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:08.536 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:08.536 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:33:08.536 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.536 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:08.536 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.536 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:33:08.536 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.536 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:08.536 [2024-11-20 13:54:05.701691] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:33:08.536 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.536 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:08.536 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.536 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:08.536 [2024-11-20 13:54:05.714732] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:08.536 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:08.537 Malloc0 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:08.537 [2024-11-20 13:54:05.773340] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=64325 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=64327 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:08.537 13:54:05 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:08.537 { 00:33:08.537 "params": { 00:33:08.537 "name": "Nvme$subsystem", 00:33:08.537 "trtype": "$TEST_TRANSPORT", 00:33:08.537 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:08.537 "adrfam": "ipv4", 00:33:08.537 "trsvcid": "$NVMF_PORT", 00:33:08.537 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:08.537 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:08.537 "hdgst": ${hdgst:-false}, 00:33:08.537 "ddgst": ${ddgst:-false} 00:33:08.537 }, 00:33:08.537 "method": "bdev_nvme_attach_controller" 00:33:08.537 } 00:33:08.537 EOF 00:33:08.537 )") 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=64329 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:08.537 { 00:33:08.537 "params": { 00:33:08.537 "name": "Nvme$subsystem", 00:33:08.537 "trtype": "$TEST_TRANSPORT", 00:33:08.537 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:08.537 "adrfam": "ipv4", 00:33:08.537 "trsvcid": "$NVMF_PORT", 00:33:08.537 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:08.537 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:08.537 "hdgst": ${hdgst:-false}, 00:33:08.537 "ddgst": ${ddgst:-false} 00:33:08.537 }, 00:33:08.537 "method": "bdev_nvme_attach_controller" 00:33:08.537 } 00:33:08.537 EOF 00:33:08.537 )") 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=64332 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:08.537 { 00:33:08.537 "params": { 00:33:08.537 "name": "Nvme$subsystem", 00:33:08.537 "trtype": "$TEST_TRANSPORT", 00:33:08.537 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:08.537 "adrfam": "ipv4", 00:33:08.537 "trsvcid": "$NVMF_PORT", 00:33:08.537 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:08.537 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:08.537 "hdgst": ${hdgst:-false}, 00:33:08.537 "ddgst": 
${ddgst:-false} 00:33:08.537 }, 00:33:08.537 "method": "bdev_nvme_attach_controller" 00:33:08.537 } 00:33:08.537 EOF 00:33:08.537 )") 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:08.537 { 00:33:08.537 "params": { 00:33:08.537 "name": "Nvme$subsystem", 00:33:08.537 "trtype": "$TEST_TRANSPORT", 00:33:08.537 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:08.537 "adrfam": "ipv4", 00:33:08.537 "trsvcid": "$NVMF_PORT", 00:33:08.537 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:08.537 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:08.537 "hdgst": ${hdgst:-false}, 00:33:08.537 "ddgst": ${ddgst:-false} 00:33:08.537 }, 00:33:08.537 "method": "bdev_nvme_attach_controller" 00:33:08.537 } 00:33:08.537 EOF 00:33:08.537 )") 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:08.537 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:08.537 "params": { 00:33:08.537 "name": "Nvme1", 00:33:08.537 "trtype": "tcp", 00:33:08.537 "traddr": "10.0.0.3", 00:33:08.537 "adrfam": "ipv4", 00:33:08.538 "trsvcid": "4420", 00:33:08.538 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:08.538 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:08.538 "hdgst": false, 00:33:08.538 "ddgst": false 00:33:08.538 }, 00:33:08.538 "method": "bdev_nvme_attach_controller" 00:33:08.538 }' 00:33:08.538 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:33:08.538 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:08.538 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:08.538 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:08.538 "params": { 00:33:08.538 "name": "Nvme1", 00:33:08.538 "trtype": "tcp", 00:33:08.538 "traddr": "10.0.0.3", 00:33:08.538 "adrfam": "ipv4", 00:33:08.538 "trsvcid": "4420", 00:33:08.538 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:08.538 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:08.538 "hdgst": false, 00:33:08.538 "ddgst": false 00:33:08.538 }, 00:33:08.538 "method": "bdev_nvme_attach_controller" 00:33:08.538 }' 00:33:08.538 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:08.538 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:08.538 "params": { 00:33:08.538 "name": "Nvme1", 00:33:08.538 "trtype": "tcp", 00:33:08.538 "traddr": "10.0.0.3", 00:33:08.538 "adrfam": "ipv4", 00:33:08.538 "trsvcid": "4420", 00:33:08.538 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:08.538 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:08.538 "hdgst": false, 00:33:08.538 "ddgst": false 00:33:08.538 }, 00:33:08.538 "method": "bdev_nvme_attach_controller" 00:33:08.538 }' 00:33:08.538 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:08.538 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:08.538 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:08.538 "params": { 00:33:08.538 "name": "Nvme1", 00:33:08.538 "trtype": "tcp", 00:33:08.538 "traddr": "10.0.0.3", 00:33:08.538 "adrfam": "ipv4", 00:33:08.538 "trsvcid": "4420", 00:33:08.538 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:08.538 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:08.538 "hdgst": false, 00:33:08.538 "ddgst": false 00:33:08.538 }, 00:33:08.538 "method": "bdev_nvme_attach_controller" 00:33:08.538 }' 00:33:08.538 [2024-11-20 13:54:05.840621] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:33:08.538 [2024-11-20 13:54:05.840786] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:33:08.538 [2024-11-20 13:54:05.847841] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:33:08.538 [2024-11-20 13:54:05.847969] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:33:08.538 [2024-11-20 13:54:05.854072] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:33:08.538 [2024-11-20 13:54:05.854099] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:33:08.538 [2024-11-20 13:54:05.854149] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:33:08.538 [2024-11-20 13:54:05.854203] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:33:08.797 13:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 64325 00:33:08.797 [2024-11-20 13:54:06.037339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:08.797 [2024-11-20 13:54:06.085165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:08.797 [2024-11-20 13:54:06.097791] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:33:09.057 [2024-11-20 13:54:06.128775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:09.057 [2024-11-20 13:54:06.188656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:09.057 [2024-11-20 13:54:06.201540] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:33:09.057 [2024-11-20 13:54:06.228153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:09.057 [2024-11-20 13:54:06.290268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:09.057 [2024-11-20 13:54:06.302881] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:33:09.057 [2024-11-20 13:54:06.358813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:09.057 Running I/O for 1 seconds... 00:33:09.316 Running I/O for 1 seconds... 00:33:09.316 Running I/O for 1 seconds... 00:33:09.316 [2024-11-20 13:54:06.421092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:33:09.316 [2024-11-20 13:54:06.433658] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:33:09.316 Running I/O for 1 seconds... 
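Structurally, the interleaved startup output above is four background bdevperf instances, one per workload, each later reaped by pid (the wait 64325/64327/64329/64332 calls). The script itself is not reproduced in this log, so the sketch below is a reconstruction from the core masks, -i ids and per-job parameters visible in the EAL arguments and latency tables; the pid variable names are hypothetical.

# fan-out/wait pattern behind the four parallel bdevperf runs above
BPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
$BPERF -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & pid_write=$!
$BPERF -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & pid_read=$!
$BPERF -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & pid_flush=$!
$BPERF -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & pid_unmap=$!
wait "$pid_write" "$pid_read" "$pid_flush" "$pid_unmap"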
00:33:10.252 9544.00 IOPS, 37.28 MiB/s 00:33:10.252 Latency(us) 00:33:10.252 [2024-11-20T13:54:07.575Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:10.252 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:33:10.252 Nvme1n1 : 1.01 9589.97 37.46 0.00 0.00 13286.88 7555.24 19803.89 00:33:10.252 [2024-11-20T13:54:07.575Z] =================================================================================================================== 00:33:10.252 [2024-11-20T13:54:07.575Z] Total : 9589.97 37.46 0.00 0.00 13286.88 7555.24 19803.89 00:33:10.252 186984.00 IOPS, 730.41 MiB/s 00:33:10.252 Latency(us) 00:33:10.252 [2024-11-20T13:54:07.575Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:10.252 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:33:10.252 Nvme1n1 : 1.00 186645.96 729.09 0.00 0.00 682.30 313.01 1810.11 00:33:10.252 [2024-11-20T13:54:07.575Z] =================================================================================================================== 00:33:10.252 [2024-11-20T13:54:07.575Z] Total : 186645.96 729.09 0.00 0.00 682.30 313.01 1810.11 00:33:10.252 8513.00 IOPS, 33.25 MiB/s 00:33:10.252 Latency(us) 00:33:10.252 [2024-11-20T13:54:07.575Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:10.252 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:33:10.252 Nvme1n1 : 1.01 8577.91 33.51 0.00 0.00 14856.38 7269.06 26099.93 00:33:10.252 [2024-11-20T13:54:07.575Z] =================================================================================================================== 00:33:10.252 [2024-11-20T13:54:07.575Z] Total : 8577.91 33.51 0.00 0.00 14856.38 7269.06 26099.93 00:33:10.252 9800.00 IOPS, 38.28 MiB/s 00:33:10.252 Latency(us) 00:33:10.252 [2024-11-20T13:54:07.575Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:10.252 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:33:10.252 Nvme1n1 : 1.01 9885.82 38.62 0.00 0.00 12903.90 5466.10 25527.56 00:33:10.252 [2024-11-20T13:54:07.575Z] =================================================================================================================== 00:33:10.252 [2024-11-20T13:54:07.575Z] Total : 9885.82 38.62 0.00 0.00 12903.90 5466.10 25527.56 00:33:10.510 13:54:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 64327 00:33:10.510 13:54:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 64329 00:33:10.510 13:54:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 64332 00:33:10.510 13:54:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:10.510 13:54:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.510 13:54:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:10.511 13:54:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.511 13:54:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:33:10.511 13:54:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:33:10.511 13:54:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:33:10.511 13:54:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:33:10.511 13:54:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:10.511 13:54:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:33:10.511 13:54:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:10.511 13:54:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:10.511 rmmod nvme_tcp 00:33:10.511 rmmod nvme_fabrics 00:33:10.511 rmmod nvme_keyring 00:33:10.770 13:54:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:10.770 13:54:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:33:10.770 13:54:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:33:10.770 13:54:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 64290 ']' 00:33:10.770 13:54:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 64290 00:33:10.770 13:54:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 64290 ']' 00:33:10.770 13:54:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 64290 00:33:10.770 13:54:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:33:10.770 13:54:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:10.770 13:54:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64290 00:33:10.770 killing process with pid 64290 00:33:10.770 13:54:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:10.770 13:54:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:10.770 13:54:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64290' 00:33:10.770 13:54:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 64290 00:33:10.770 13:54:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 64290 00:33:11.029 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:11.029 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:11.029 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:11.029 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:33:11.030 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:33:11.030 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:11.030 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:33:11.030 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:11.030 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:33:11.030 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:33:11.030 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:33:11.030 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:33:11.030 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:33:11.030 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:33:11.030 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:33:11.030 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:33:11.030 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:33:11.030 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:33:11.030 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:33:11.030 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:33:11.030 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:11.030 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:11.030 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:33:11.030 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:11.030 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:11.030 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:11.289 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:33:11.289 ************************************ 00:33:11.289 END TEST nvmf_bdev_io_wait 00:33:11.289 ************************************ 00:33:11.289 00:33:11.289 real 0m4.626s 00:33:11.289 user 0m18.043s 00:33:11.289 sys 0m2.417s 00:33:11.289 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:11.289 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:11.289 13:54:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:33:11.289 13:54:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:11.289 13:54:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:11.289 13:54:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:33:11.289 ************************************ 00:33:11.289 START TEST nvmf_queue_depth 00:33:11.289 ************************************ 00:33:11.289 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:33:11.289 * Looking for test storage... 
00:33:11.290 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:11.290 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:11.290 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:33:11.290 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:11.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:11.549 --rc genhtml_branch_coverage=1 00:33:11.549 --rc genhtml_function_coverage=1 00:33:11.549 --rc genhtml_legend=1 00:33:11.549 --rc geninfo_all_blocks=1 00:33:11.549 --rc geninfo_unexecuted_blocks=1 00:33:11.549 00:33:11.549 ' 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:11.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:11.549 --rc genhtml_branch_coverage=1 00:33:11.549 --rc genhtml_function_coverage=1 00:33:11.549 --rc genhtml_legend=1 00:33:11.549 --rc geninfo_all_blocks=1 00:33:11.549 --rc geninfo_unexecuted_blocks=1 00:33:11.549 00:33:11.549 ' 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:11.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:11.549 --rc genhtml_branch_coverage=1 00:33:11.549 --rc genhtml_function_coverage=1 00:33:11.549 --rc genhtml_legend=1 00:33:11.549 --rc geninfo_all_blocks=1 00:33:11.549 --rc geninfo_unexecuted_blocks=1 00:33:11.549 00:33:11.549 ' 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:11.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:11.549 --rc genhtml_branch_coverage=1 00:33:11.549 --rc genhtml_function_coverage=1 00:33:11.549 --rc genhtml_legend=1 00:33:11.549 --rc geninfo_all_blocks=1 00:33:11.549 --rc geninfo_unexecuted_blocks=1 00:33:11.549 00:33:11.549 ' 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=105ec898-1662-46bd-85be-b241e399edb9 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:11.549 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:11.549 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:33:11.550 
13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:11.550 13:54:08 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:33:11.550 Cannot find device "nvmf_init_br" 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:33:11.550 Cannot find device "nvmf_init_br2" 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:33:11.550 Cannot find device "nvmf_tgt_br" 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:33:11.550 Cannot find device "nvmf_tgt_br2" 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:33:11.550 Cannot find device "nvmf_init_br" 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:33:11.550 Cannot find device "nvmf_init_br2" 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:33:11.550 Cannot find device "nvmf_tgt_br" 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:33:11.550 Cannot find device "nvmf_tgt_br2" 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:33:11.550 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:33:11.809 Cannot find device "nvmf_br" 00:33:11.809 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:33:11.809 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:33:11.809 Cannot find device "nvmf_init_if" 00:33:11.809 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:33:11.809 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:33:11.809 Cannot find device "nvmf_init_if2" 00:33:11.809 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:33:11.809 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:11.809 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:11.809 13:54:08 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:33:11.809 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:11.809 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:11.809 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:33:11.809 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:33:11.809 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:11.809 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:33:11.809 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:11.809 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:11.809 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:11.809 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:11.809 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:11.809 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:33:11.809 13:54:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:33:11.809 13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:33:11.809 13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:33:11.809 13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:33:11.809 13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:33:11.809 13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:33:11.809 13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:33:11.809 13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:33:11.809 13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:11.809 13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:11.809 13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:11.810 13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:33:11.810 13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:33:11.810 13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:33:11.810 
13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:33:11.810 13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:11.810 13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:11.810 13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:11.810 13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:33:11.810 13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:33:11.810 13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:33:11.810 13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:11.810 13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:33:11.810 13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:33:11.810 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:11.810 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.131 ms 00:33:11.810 00:33:11.810 --- 10.0.0.3 ping statistics --- 00:33:11.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:11.810 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:33:11.810 13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:33:11.810 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:33:11.810 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.083 ms 00:33:11.810 00:33:11.810 --- 10.0.0.4 ping statistics --- 00:33:11.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:11.810 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:33:11.810 13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:11.810 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:11.810 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:33:11.810 00:33:11.810 --- 10.0.0.1 ping statistics --- 00:33:11.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:11.810 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:33:11.810 13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:33:11.810 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:11.810 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:33:11.810 00:33:11.810 --- 10.0.0.2 ping statistics --- 00:33:11.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:11.810 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:33:11.810 13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:11.810 13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:33:11.810 13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:11.810 13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:11.810 13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:11.810 13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:11.810 13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:11.810 13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:11.810 13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:12.084 13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:33:12.084 13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:12.084 13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:12.084 13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:12.084 13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:12.084 13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=64618 00:33:12.084 13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 64618 00:33:12.084 13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64618 ']' 00:33:12.084 13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:12.084 13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:12.084 13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:12.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:12.084 13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:12.084 13:54:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:12.084 [2024-11-20 13:54:09.199217] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
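The four ping checks above exercise the veth topology that nvmf_veth_init just assembled: nvmf_init_if/nvmf_init_if2 (10.0.0.1/10.0.0.2) stay in the root namespace, nvmf_tgt_if/nvmf_tgt_if2 (10.0.0.3/10.0.0.4) are moved into the nvmf_tgt_ns_spdk namespace, and all four veth peers are enslaved to the nvmf_br bridge. The primary path, condensed from the commands already shown (the *_if2/*_br2 pair is wired up the same way):

# root namespace (initiator)  <-- nvmf_br bridge -->  nvmf_tgt_ns_spdk (target)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ping -c 1 10.0.0.3    # root namespace -> target namespace, as verified above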
00:33:12.084 [2024-11-20 13:54:09.199409] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:12.084 [2024-11-20 13:54:09.359511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:12.344 [2024-11-20 13:54:09.422320] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:12.345 [2024-11-20 13:54:09.422460] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:12.345 [2024-11-20 13:54:09.422473] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:12.345 [2024-11-20 13:54:09.422480] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:12.345 [2024-11-20 13:54:09.422485] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:12.345 [2024-11-20 13:54:09.422894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:12.345 [2024-11-20 13:54:09.466598] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:33:12.934 13:54:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:12.934 13:54:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:33:12.934 13:54:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:12.934 13:54:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:12.934 13:54:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:12.934 13:54:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:12.934 13:54:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:12.934 13:54:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.934 13:54:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:12.934 [2024-11-20 13:54:10.192650] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:12.934 13:54:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.935 13:54:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:12.935 13:54:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.935 13:54:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:12.935 Malloc0 00:33:12.935 13:54:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.935 13:54:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:12.935 13:54:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.935 13:54:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:33:12.935 13:54:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.935 13:54:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:12.935 13:54:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.935 13:54:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:12.935 13:54:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.935 13:54:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:33:12.935 13:54:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.935 13:54:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:12.935 [2024-11-20 13:54:10.251323] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:33:13.194 13:54:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.194 13:54:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=64654 00:33:13.194 13:54:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:33:13.194 13:54:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:13.194 13:54:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 64654 /var/tmp/bdevperf.sock 00:33:13.194 13:54:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64654 ']' 00:33:13.194 13:54:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:13.194 13:54:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:13.194 13:54:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:13.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:13.194 13:54:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:13.194 13:54:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:13.194 [2024-11-20 13:54:10.319049] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
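Up to this point, queue_depth.sh has built the whole target side through rpc_cmd. Collected in one place, with the standalone scripts/rpc.py spelling as an assumed equivalent (rpc_cmd is the autotest helper that forwards to it over the target's default RPC socket):

# target-side setup for the queue-depth run, same arguments as the rpc_cmd calls above
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MB malloc bdev, 512-byte blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
# the client side starts bdevperf with -z (wait for RPC) on /var/tmp/bdevperf.sock, then, as
# the lines that follow show, attaches NVMe0 over TCP to 10.0.0.3:4420 and runs perform_tests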
00:33:13.194 [2024-11-20 13:54:10.319247] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64654 ] 00:33:13.194 [2024-11-20 13:54:10.474422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:13.454 [2024-11-20 13:54:10.541991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:13.454 [2024-11-20 13:54:10.617804] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:33:14.024 13:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:14.024 13:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:33:14.024 13:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:14.024 13:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.024 13:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:14.024 NVMe0n1 00:33:14.024 13:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.024 13:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:14.284 Running I/O for 10 seconds... 00:33:16.164 8320.00 IOPS, 32.50 MiB/s [2024-11-20T13:54:14.428Z] 9216.00 IOPS, 36.00 MiB/s [2024-11-20T13:54:15.808Z] 9565.00 IOPS, 37.36 MiB/s [2024-11-20T13:54:16.748Z] 9660.50 IOPS, 37.74 MiB/s [2024-11-20T13:54:17.689Z] 9752.80 IOPS, 38.10 MiB/s [2024-11-20T13:54:18.647Z] 9883.17 IOPS, 38.61 MiB/s [2024-11-20T13:54:19.586Z] 9837.14 IOPS, 38.43 MiB/s [2024-11-20T13:54:20.525Z] 9890.88 IOPS, 38.64 MiB/s [2024-11-20T13:54:21.464Z] 9964.56 IOPS, 38.92 MiB/s [2024-11-20T13:54:21.464Z] 10049.50 IOPS, 39.26 MiB/s 00:33:24.141 Latency(us) 00:33:24.141 [2024-11-20T13:54:21.464Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:24.141 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:33:24.141 Verification LBA range: start 0x0 length 0x4000 00:33:24.141 NVMe0n1 : 10.08 10071.01 39.34 0.00 0.00 101257.72 23009.15 79215.57 00:33:24.141 [2024-11-20T13:54:21.464Z] =================================================================================================================== 00:33:24.141 [2024-11-20T13:54:21.464Z] Total : 10071.01 39.34 0.00 0.00 101257.72 23009.15 79215.57 00:33:24.402 { 00:33:24.402 "results": [ 00:33:24.402 { 00:33:24.402 "job": "NVMe0n1", 00:33:24.402 "core_mask": "0x1", 00:33:24.402 "workload": "verify", 00:33:24.402 "status": "finished", 00:33:24.402 "verify_range": { 00:33:24.402 "start": 0, 00:33:24.402 "length": 16384 00:33:24.402 }, 00:33:24.402 "queue_depth": 1024, 00:33:24.402 "io_size": 4096, 00:33:24.402 "runtime": 10.078925, 00:33:24.402 "iops": 10071.014517917338, 00:33:24.402 "mibps": 39.3399004606146, 00:33:24.402 "io_failed": 0, 00:33:24.402 "io_timeout": 0, 00:33:24.402 "avg_latency_us": 101257.7226033781, 00:33:24.402 "min_latency_us": 23009.145851528385, 00:33:24.402 "max_latency_us": 79215.56681222707 
00:33:24.402 } 00:33:24.402 ], 00:33:24.402 "core_count": 1 00:33:24.402 } 00:33:24.402 13:54:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 64654 00:33:24.402 13:54:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64654 ']' 00:33:24.402 13:54:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64654 00:33:24.402 13:54:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:33:24.402 13:54:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:24.402 13:54:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64654 00:33:24.402 killing process with pid 64654 00:33:24.402 Received shutdown signal, test time was about 10.000000 seconds 00:33:24.402 00:33:24.402 Latency(us) 00:33:24.402 [2024-11-20T13:54:21.725Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:24.402 [2024-11-20T13:54:21.725Z] =================================================================================================================== 00:33:24.402 [2024-11-20T13:54:21.725Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:24.402 13:54:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:24.402 13:54:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:24.402 13:54:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64654' 00:33:24.402 13:54:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64654 00:33:24.402 13:54:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64654 00:33:24.402 13:54:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:33:24.402 13:54:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:33:24.402 13:54:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:24.402 13:54:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:33:24.662 13:54:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:24.662 13:54:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:33:24.662 13:54:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:24.662 13:54:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:24.662 rmmod nvme_tcp 00:33:24.662 rmmod nvme_fabrics 00:33:24.662 rmmod nvme_keyring 00:33:24.662 13:54:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:24.662 13:54:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:33:24.662 13:54:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:33:24.662 13:54:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 64618 ']' 00:33:24.662 13:54:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 64618 00:33:24.662 13:54:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64618 ']' 
00:33:24.662 13:54:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64618 00:33:24.662 13:54:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:33:24.662 13:54:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:24.662 13:54:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64618 00:33:24.662 killing process with pid 64618 00:33:24.662 13:54:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:24.662 13:54:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:24.662 13:54:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64618' 00:33:24.662 13:54:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64618 00:33:24.662 13:54:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64618 00:33:24.929 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:24.929 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:24.929 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:24.929 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:33:24.929 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:33:24.929 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:24.929 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:33:24.929 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:24.929 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:33:24.929 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:33:24.929 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:33:24.929 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:33:24.929 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:33:24.929 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:33:24.929 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:33:24.929 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:33:24.929 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:33:24.929 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:33:25.200 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:33:25.200 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:33:25.200 13:54:22 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:25.200 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:25.200 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:33:25.200 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:25.200 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:25.200 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:25.200 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:33:25.200 ************************************ 00:33:25.200 END TEST nvmf_queue_depth 00:33:25.200 ************************************ 00:33:25.200 00:33:25.200 real 0m13.942s 00:33:25.200 user 0m23.496s 00:33:25.200 sys 0m2.409s 00:33:25.200 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:25.200 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:25.200 13:54:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:33:25.200 13:54:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:25.200 13:54:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:25.200 13:54:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:33:25.200 ************************************ 00:33:25.200 START TEST nvmf_target_multipath 00:33:25.200 ************************************ 00:33:25.200 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:33:25.461 * Looking for test storage... 
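The END TEST banner, the real/user/sys timing block, and the run_test call that launches the next script all come from the suite runner that wraps every test. A minimal stand-in for that wrapper pattern (illustrative only, not the actual autotest_common.sh implementation):

    # hypothetical reduction of the run_test pattern visible in the banners
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                                  # emits the real/user/sys block when the script exits
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    # usage as in the trace:
    # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp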
00:33:25.461 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:25.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:25.461 --rc genhtml_branch_coverage=1 00:33:25.461 --rc genhtml_function_coverage=1 00:33:25.461 --rc genhtml_legend=1 00:33:25.461 --rc geninfo_all_blocks=1 00:33:25.461 --rc geninfo_unexecuted_blocks=1 00:33:25.461 00:33:25.461 ' 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:25.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:25.461 --rc genhtml_branch_coverage=1 00:33:25.461 --rc genhtml_function_coverage=1 00:33:25.461 --rc genhtml_legend=1 00:33:25.461 --rc geninfo_all_blocks=1 00:33:25.461 --rc geninfo_unexecuted_blocks=1 00:33:25.461 00:33:25.461 ' 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:25.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:25.461 --rc genhtml_branch_coverage=1 00:33:25.461 --rc genhtml_function_coverage=1 00:33:25.461 --rc genhtml_legend=1 00:33:25.461 --rc geninfo_all_blocks=1 00:33:25.461 --rc geninfo_unexecuted_blocks=1 00:33:25.461 00:33:25.461 ' 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:25.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:25.461 --rc genhtml_branch_coverage=1 00:33:25.461 --rc genhtml_function_coverage=1 00:33:25.461 --rc genhtml_legend=1 00:33:25.461 --rc geninfo_all_blocks=1 00:33:25.461 --rc geninfo_unexecuted_blocks=1 00:33:25.461 00:33:25.461 ' 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=105ec898-1662-46bd-85be-b241e399edb9 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:25.461 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.462 
13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:25.462 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:33:25.462 13:54:22 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:33:25.462 Cannot find device "nvmf_init_br" 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:33:25.462 Cannot find device "nvmf_init_br2" 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:33:25.462 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:33:25.722 Cannot find device "nvmf_tgt_br" 00:33:25.722 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:33:25.722 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:33:25.722 Cannot find device "nvmf_tgt_br2" 00:33:25.722 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:33:25.722 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:33:25.722 Cannot find device "nvmf_init_br" 00:33:25.722 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:33:25.722 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:33:25.722 Cannot find device "nvmf_init_br2" 00:33:25.722 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:33:25.722 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:33:25.722 Cannot find device "nvmf_tgt_br" 00:33:25.722 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:33:25.722 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:33:25.722 Cannot find device "nvmf_tgt_br2" 00:33:25.722 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:33:25.722 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:33:25.722 Cannot find device "nvmf_br" 00:33:25.722 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:33:25.722 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:33:25.722 Cannot find device "nvmf_init_if" 00:33:25.722 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:33:25.722 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:33:25.722 Cannot find device "nvmf_init_if2" 00:33:25.722 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:33:25.722 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:25.722 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:25.722 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:33:25.722 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:25.722 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:25.722 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:33:25.722 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:33:25.722 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:25.722 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:33:25.722 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:25.722 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:25.722 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:25.722 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:25.722 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:25.722 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:33:25.722 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:33:25.722 13:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:33:25.722 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:33:25.722 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:33:25.722 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:33:25.722 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:33:25.722 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:33:25.722 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:33:25.722 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
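The nvmf_veth_init sequence traced here builds the virtual topology the multipath test runs on: a dedicated network namespace for the target, veth pairs whose outer ends are enslaved to the nvmf_br bridge, addresses 10.0.0.1/2 on the initiator ends and 10.0.0.3/4 on the target ends inside the namespace. Condensed from the surrounding commands (only one of the two initiator/target pairs shown):

    # condensed from the nvmf_veth_init trace; the second pair (nvmf_init_if2 / nvmf_tgt_if2) is analogous
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # target end lives inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                        # bridge the two sides together
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in

The ping checks that follow (10.0.0.3 and 10.0.0.4 from the host, 10.0.0.1 and 10.0.0.2 from inside the namespace) confirm both directions are reachable before the nvmf target is started.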
00:33:25.722 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:25.983 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:25.983 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:33:25.983 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:33:25.983 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:33:25.983 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:33:25.983 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:25.983 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:25.983 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:25.983 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:33:25.983 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:33:25.983 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:33:25.983 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:25.983 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:33:25.983 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:33:25.983 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:25.983 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:33:25.983 00:33:25.983 --- 10.0.0.3 ping statistics --- 00:33:25.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:25.983 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:33:25.983 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:33:25.983 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:33:25.983 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:33:25.983 00:33:25.983 --- 10.0.0.4 ping statistics --- 00:33:25.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:25.983 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:33:25.983 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:25.983 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:25.983 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:33:25.983 00:33:25.983 --- 10.0.0.1 ping statistics --- 00:33:25.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:25.983 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:33:25.983 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:33:25.983 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:25.983 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:33:25.983 00:33:25.983 --- 10.0.0.2 ping statistics --- 00:33:25.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:25.983 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:33:25.983 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:25.983 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:33:25.983 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:25.983 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:25.983 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:25.983 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:25.983 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:25.983 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:25.983 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:25.983 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:33:25.983 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:33:25.983 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:33:25.983 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:25.983 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:25.983 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:25.983 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:25.983 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=65033 00:33:25.983 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 65033 00:33:25.983 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 65033 ']' 00:33:25.983 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:25.983 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:25.983 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:33:25.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:25.983 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:25.983 13:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:25.983 [2024-11-20 13:54:23.245587] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:33:25.983 [2024-11-20 13:54:23.245726] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:26.243 [2024-11-20 13:54:23.401379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:26.243 [2024-11-20 13:54:23.472611] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:26.243 [2024-11-20 13:54:23.472669] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:26.243 [2024-11-20 13:54:23.472676] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:26.243 [2024-11-20 13:54:23.472682] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:26.243 [2024-11-20 13:54:23.472686] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:26.243 [2024-11-20 13:54:23.473599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:26.243 [2024-11-20 13:54:23.473686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:26.243 [2024-11-20 13:54:23.473863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:26.243 [2024-11-20 13:54:23.473865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:26.243 [2024-11-20 13:54:23.538652] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:33:26.811 13:54:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:26.811 13:54:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:33:26.811 13:54:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:26.811 13:54:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:26.811 13:54:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:27.069 13:54:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:27.070 13:54:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:27.070 [2024-11-20 13:54:24.382092] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:27.327 13:54:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:27.585 Malloc0 00:33:27.585 13:54:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:33:27.844 13:54:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:28.102 13:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:33:28.102 [2024-11-20 13:54:25.395754] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:33:28.102 13:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:33:28.360 [2024-11-20 13:54:25.639534] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:33:28.360 13:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid=105ec898-1662-46bd-85be-b241e399edb9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:33:28.619 13:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid=105ec898-1662-46bd-85be-b241e399edb9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:33:28.877 13:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:33:28.877 13:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:33:28.877 13:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:33:28.877 13:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:33:28.877 13:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:33:30.780 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:33:30.780 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:33:30.780 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:33:30.780 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:33:30.780 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:33:30.780 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:33:30.780 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:33:30.780 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:33:30.780 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:33:30.780 13:54:27 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:33:30.780 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:33:30.780 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:33:30.780 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:33:30.780 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:33:30.780 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:33:30.780 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:33:30.780 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:33:30.780 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:33:30.780 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:33:30.780 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:33:30.780 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:33:30.780 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:33:30.780 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:33:30.780 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:33:30.780 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:33:30.780 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:33:30.780 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:33:30.780 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:33:30.780 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:33:30.780 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:33:30.780 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:33:30.780 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:33:30.780 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=65118 00:33:30.780 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:33:30.780 13:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:33:30.780 [global] 00:33:30.780 thread=1 00:33:30.780 invalidate=1 00:33:30.780 rw=randrw 00:33:30.780 time_based=1 00:33:30.780 runtime=6 00:33:30.780 ioengine=libaio 00:33:30.780 direct=1 00:33:30.780 bs=4096 00:33:30.780 iodepth=128 00:33:30.780 norandommap=0 00:33:30.780 numjobs=1 00:33:30.780 00:33:30.780 verify_dump=1 00:33:30.780 verify_backlog=512 00:33:30.780 verify_state_save=0 00:33:30.780 do_verify=1 00:33:30.780 verify=crc32c-intel 00:33:30.780 [job0] 00:33:30.780 filename=/dev/nvme0n1 00:33:30.780 Could not set queue depth (nvme0n1) 00:33:31.039 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:31.039 fio-3.35 00:33:31.039 Starting 1 thread 00:33:31.973 13:54:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:33:31.973 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:33:32.231 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:33:32.231 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:33:32.231 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:33:32.231 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:33:32.231 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:33:32.231 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:33:32.231 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:33:32.231 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:33:32.231 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:33:32.231 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:33:32.231 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:33:32.232 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:33:32.232 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:33:32.489 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:33:32.747 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:33:32.747 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:33:32.748 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:33:32.748 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:33:32.748 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:33:32.748 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:33:32.748 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:33:32.748 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:33:32.748 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:33:32.748 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:33:32.748 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:33:32.748 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:33:32.748 13:54:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 65118 00:33:38.014 00:33:38.014 job0: (groupid=0, jobs=1): err= 0: pid=65145: Wed Nov 20 13:54:34 2024 00:33:38.014 read: IOPS=9422, BW=36.8MiB/s (38.6MB/s)(221MiB/6008msec) 00:33:38.014 slat (usec): min=4, max=6118, avg=58.97, stdev=227.87 00:33:38.014 clat (usec): min=1167, max=20465, avg=9194.63, stdev=2076.41 00:33:38.014 lat (usec): min=1205, max=20485, avg=9253.60, stdev=2084.84 00:33:38.014 clat percentiles (usec): 00:33:38.014 | 1.00th=[ 4686], 5.00th=[ 6325], 10.00th=[ 6980], 20.00th=[ 7635], 00:33:38.014 | 30.00th=[ 7963], 40.00th=[ 8455], 50.00th=[ 8979], 60.00th=[ 9634], 00:33:38.014 | 70.00th=[10159], 80.00th=[10683], 90.00th=[11469], 95.00th=[12780], 00:33:38.014 | 99.00th=[15795], 99.50th=[16450], 99.90th=[18220], 99.95th=[18744], 00:33:38.014 | 99.99th=[20317] 00:33:38.014 bw ( KiB/s): min= 9656, max=25176, per=51.25%, avg=19317.09, stdev=4230.04, samples=11 00:33:38.014 iops : min= 2414, max= 6294, avg=4829.27, stdev=1057.51, samples=11 00:33:38.014 write: IOPS=5481, BW=21.4MiB/s (22.5MB/s)(115MiB/5347msec); 0 zone resets 00:33:38.014 slat (usec): min=12, max=3980, avg=77.11, stdev=158.45 00:33:38.014 clat (usec): min=789, max=20123, avg=8233.99, stdev=1914.62 00:33:38.014 lat (usec): min=877, max=20165, avg=8311.09, stdev=1924.40 00:33:38.014 clat percentiles (usec): 00:33:38.014 | 1.00th=[ 4015], 5.00th=[ 5014], 10.00th=[ 5735], 20.00th=[ 6521], 00:33:38.014 | 30.00th=[ 7177], 40.00th=[ 7898], 50.00th=[ 8455], 60.00th=[ 8979], 00:33:38.014 | 70.00th=[ 9372], 80.00th=[ 9765], 90.00th=[10290], 95.00th=[10683], 00:33:38.014 | 99.00th=[13566], 99.50th=[14877], 99.90th=[18220], 99.95th=[19268], 00:33:38.014 | 99.99th=[20055] 00:33:38.014 bw ( KiB/s): min= 9960, max=24488, per=88.15%, avg=19330.18, stdev=4052.46, samples=11 00:33:38.014 iops : min= 2490, max= 6122, avg=4832.55, stdev=1013.11, samples=11 00:33:38.014 lat (usec) : 1000=0.01% 00:33:38.014 lat (msec) : 2=0.11%, 4=0.44%, 10=73.33%, 20=26.11%, 50=0.01% 00:33:38.014 cpu : usr=5.66%, sys=28.18%, ctx=5028, majf=0, minf=114 00:33:38.014 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:33:38.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.014 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:38.014 issued rwts: total=56610,29312,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:38.014 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:38.014 00:33:38.014 Run status group 0 (all jobs): 00:33:38.015 READ: bw=36.8MiB/s (38.6MB/s), 36.8MiB/s-36.8MiB/s (38.6MB/s-38.6MB/s), io=221MiB (232MB), run=6008-6008msec 00:33:38.015 WRITE: bw=21.4MiB/s (22.5MB/s), 21.4MiB/s-21.4MiB/s (22.5MB/s-22.5MB/s), io=115MiB (120MB), run=5347-5347msec 00:33:38.015 00:33:38.015 Disk stats (read/write): 00:33:38.015 nvme0n1: ios=56157/28649, merge=0/0, ticks=484597/215083, in_queue=699680, util=98.71% 00:33:38.015 13:54:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:33:38.015 13:54:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:33:38.015 13:54:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:33:38.015 13:54:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:33:38.015 13:54:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:33:38.015 13:54:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:33:38.015 13:54:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:33:38.015 13:54:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:33:38.015 13:54:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:33:38.015 13:54:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:33:38.015 13:54:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:33:38.015 13:54:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:33:38.015 13:54:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:33:38.015 13:54:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:33:38.015 13:54:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:33:38.015 13:54:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=65225 00:33:38.015 13:54:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:33:38.015 13:54:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:33:38.015 [global] 00:33:38.015 thread=1 00:33:38.015 invalidate=1 00:33:38.015 rw=randrw 00:33:38.015 time_based=1 00:33:38.015 runtime=6 00:33:38.015 ioengine=libaio 00:33:38.015 direct=1 00:33:38.015 bs=4096 00:33:38.015 iodepth=128 00:33:38.015 norandommap=0 00:33:38.015 numjobs=1 00:33:38.015 00:33:38.015 verify_dump=1 00:33:38.015 verify_backlog=512 00:33:38.015 verify_state_save=0 00:33:38.015 do_verify=1 00:33:38.015 verify=crc32c-intel 00:33:38.015 [job0] 00:33:38.015 filename=/dev/nvme0n1 00:33:38.015 Could not set queue depth (nvme0n1) 00:33:38.015 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:38.015 fio-3.35 00:33:38.015 Starting 1 thread 00:33:38.964 13:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:33:38.964 13:54:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.4 -s 4420 -n non_optimized 00:33:39.224 13:54:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:33:39.224 13:54:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:33:39.224 13:54:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:33:39.224 13:54:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:33:39.224 13:54:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:33:39.224 13:54:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:33:39.224 13:54:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:33:39.224 13:54:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:33:39.224 13:54:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:33:39.224 13:54:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:33:39.224 13:54:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:33:39.224 13:54:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:33:39.224 13:54:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:33:39.483 13:54:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:33:39.742 13:54:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:33:39.742 13:54:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:33:39.742 13:54:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:33:39.742 13:54:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:33:39.742 13:54:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:33:39.742 13:54:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:33:39.742 13:54:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:33:39.742 13:54:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:33:39.742 13:54:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:33:39.742 13:54:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:33:39.742 13:54:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:33:39.742 13:54:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:33:39.742 13:54:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 65225 00:33:45.014 00:33:45.014 job0: (groupid=0, jobs=1): err= 0: pid=65251: Wed Nov 20 13:54:41 2024 00:33:45.014 read: IOPS=8483, BW=33.1MiB/s (34.7MB/s)(199MiB/6002msec) 00:33:45.014 slat (usec): min=4, max=6994, avg=60.36, stdev=247.55 00:33:45.014 clat (usec): min=295, max=25864, avg=10432.43, stdev=4260.13 00:33:45.014 lat (usec): min=315, max=25875, avg=10492.78, stdev=4268.59 00:33:45.014 clat percentiles (usec): 00:33:45.014 | 1.00th=[ 635], 5.00th=[ 1270], 10.00th=[ 4490], 20.00th=[ 8586], 00:33:45.014 | 30.00th=[ 9896], 40.00th=[10421], 50.00th=[10683], 60.00th=[10814], 00:33:45.014 | 70.00th=[11338], 80.00th=[11994], 90.00th=[15926], 95.00th=[18220], 00:33:45.014 | 99.00th=[22676], 99.50th=[23725], 99.90th=[24773], 99.95th=[25035], 00:33:45.014 | 99.99th=[25560] 00:33:45.014 bw ( KiB/s): min=12528, max=23520, per=52.20%, avg=17712.73, stdev=3380.67, samples=11 00:33:45.014 iops : min= 3132, max= 5880, avg=4428.18, stdev=845.17, samples=11 00:33:45.014 write: IOPS=4803, BW=18.8MiB/s (19.7MB/s)(101MiB/5376msec); 0 zone resets 00:33:45.014 slat (usec): min=10, max=3249, avg=76.68, stdev=158.54 00:33:45.014 clat (usec): min=257, max=23371, avg=8919.48, stdev=3740.47 00:33:45.014 lat (usec): min=300, max=23410, avg=8996.16, stdev=3748.50 00:33:45.014 clat percentiles (usec): 00:33:45.014 | 1.00th=[ 553], 5.00th=[ 1074], 10.00th=[ 3458], 20.00th=[ 6128], 00:33:45.014 | 30.00th=[ 8225], 40.00th=[ 9241], 50.00th=[ 9634], 60.00th=[10028], 00:33:45.014 | 70.00th=[10290], 80.00th=[10683], 90.00th=[11863], 95.00th=[15795], 00:33:45.014 | 99.00th=[18744], 99.50th=[19530], 99.90th=[22152], 99.95th=[22676], 00:33:45.014 | 99.99th=[22938] 00:33:45.014 bw ( KiB/s): min=12512, max=24576, per=91.93%, avg=17661.82, stdev=3529.77, samples=11 00:33:45.014 iops : min= 3128, max= 6144, avg=4415.45, stdev=882.44, samples=11 00:33:45.014 lat (usec) : 500=0.50%, 750=1.42%, 1000=1.89% 00:33:45.014 lat (msec) : 2=3.61%, 4=2.69%, 10=31.54%, 20=56.47%, 50=1.88% 00:33:45.014 cpu : usr=5.28%, sys=26.92%, ctx=5939, majf=0, minf=54 00:33:45.014 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:33:45.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:45.015 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:45.015 issued rwts: total=50917,25821,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:45.015 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:33:45.015 00:33:45.015 Run status group 0 (all jobs): 00:33:45.015 READ: bw=33.1MiB/s (34.7MB/s), 33.1MiB/s-33.1MiB/s (34.7MB/s-34.7MB/s), io=199MiB (209MB), run=6002-6002msec 00:33:45.015 WRITE: bw=18.8MiB/s (19.7MB/s), 18.8MiB/s-18.8MiB/s (19.7MB/s-19.7MB/s), io=101MiB (106MB), run=5376-5376msec 00:33:45.015 00:33:45.015 Disk stats (read/write): 00:33:45.015 nvme0n1: ios=50366/25317, merge=0/0, ticks=499002/209630, in_queue=708632, util=98.65% 00:33:45.015 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:45.015 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:33:45.015 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:45.015 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:33:45.015 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:45.015 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:33:45.015 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:45.015 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:33:45.015 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:33:45.015 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:45.015 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:33:45.015 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:33:45.015 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:33:45.015 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:33:45.015 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:45.015 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:45.015 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:45.015 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:45.015 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:45.015 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:45.015 rmmod nvme_tcp 00:33:45.015 rmmod nvme_fabrics 00:33:45.015 rmmod nvme_keyring 00:33:45.015 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:45.015 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:45.015 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:45.015 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 
65033 ']' 00:33:45.015 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 65033 00:33:45.015 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 65033 ']' 00:33:45.015 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 65033 00:33:45.015 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:33:45.015 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:45.015 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65033 00:33:45.015 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:45.015 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:45.015 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65033' 00:33:45.015 killing process with pid 65033 00:33:45.015 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 65033 00:33:45.015 13:54:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 65033 00:33:45.015 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:45.015 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:45.015 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:45.015 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:45.015 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:45.015 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:45.015 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:33:45.015 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:45.015 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:33:45.015 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:33:45.015 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:33:45.015 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:33:45.015 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:33:45.015 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:33:45.015 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:33:45.015 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:33:45.015 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:33:45.015 13:54:42 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:33:45.015 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:33:45.015 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:33:45.015 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:45.015 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:45.274 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:33:45.274 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:45.274 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:45.274 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:45.274 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:33:45.274 00:33:45.274 real 0m19.937s 00:33:45.274 user 1m14.800s 00:33:45.274 sys 0m8.842s 00:33:45.274 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:45.274 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:45.274 ************************************ 00:33:45.274 END TEST nvmf_target_multipath 00:33:45.274 ************************************ 00:33:45.274 13:54:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:33:45.274 13:54:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:45.274 13:54:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:45.274 13:54:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:33:45.274 ************************************ 00:33:45.274 START TEST nvmf_zcopy 00:33:45.274 ************************************ 00:33:45.274 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:33:45.274 * Looking for test storage... 
00:33:45.535 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:45.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:45.535 --rc genhtml_branch_coverage=1 00:33:45.535 --rc genhtml_function_coverage=1 00:33:45.535 --rc genhtml_legend=1 00:33:45.535 --rc geninfo_all_blocks=1 00:33:45.535 --rc geninfo_unexecuted_blocks=1 00:33:45.535 00:33:45.535 ' 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:45.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:45.535 --rc genhtml_branch_coverage=1 00:33:45.535 --rc genhtml_function_coverage=1 00:33:45.535 --rc genhtml_legend=1 00:33:45.535 --rc geninfo_all_blocks=1 00:33:45.535 --rc geninfo_unexecuted_blocks=1 00:33:45.535 00:33:45.535 ' 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:45.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:45.535 --rc genhtml_branch_coverage=1 00:33:45.535 --rc genhtml_function_coverage=1 00:33:45.535 --rc genhtml_legend=1 00:33:45.535 --rc geninfo_all_blocks=1 00:33:45.535 --rc geninfo_unexecuted_blocks=1 00:33:45.535 00:33:45.535 ' 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:45.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:45.535 --rc genhtml_branch_coverage=1 00:33:45.535 --rc genhtml_function_coverage=1 00:33:45.535 --rc genhtml_legend=1 00:33:45.535 --rc geninfo_all_blocks=1 00:33:45.535 --rc geninfo_unexecuted_blocks=1 00:33:45.535 00:33:45.535 ' 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
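The cmp_versions/lt trace above is scripts/common.sh deciding whether the installed lcov is older than 2.x before the test picks its coverage flags: both dotted versions are split into numeric fields and compared left to right. A minimal bash sketch of that idea (simplified to dot-separated, purely numeric versions; written for illustration, not copied from the SPDK helper):

    version_lt() {                       # "is $1 < $2" for dotted versions
        local IFS=.
        local -a a=($1) b=($2)
        local i x y
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0} y=${b[i]:-0}
            ((x < y)) && return 0
            ((x > y)) && return 1
        done
        return 1                         # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # prints the message
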
00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=105ec898-1662-46bd-85be-b241e399edb9 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:45.535 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:45.536 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
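The nvmf/common.sh trace above fixes the address plan that the rest of this zcopy run relies on: port 4420, initiator addresses 10.0.0.1 and 10.0.0.2 on the host side, target addresses 10.0.0.3 and 10.0.0.4 inside the nvmf_tgt_ns_spdk namespace, plus a freshly generated host NQN/ID pair. Collected in one place for readability (values are copied from the trace; only the grouping and the HOSTID derivation shown here are editorial):

    NVMF_PORT=4420
    NVMF_FIRST_TARGET_IP=10.0.0.3        # served from inside nvmf_tgt_ns_spdk
    NVMF_SECOND_TARGET_IP=10.0.0.4
    NVMF_FIRST_INITIATOR_IP=10.0.0.1     # host side of the veth pairs
    NVMF_SECOND_INITIATOR_IP=10.0.0.2
    NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}  # one way to peel off the uuid; common.sh's exact derivation may differ
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
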
00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:33:45.536 Cannot find device "nvmf_init_br" 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:33:45.536 13:54:42 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:33:45.536 Cannot find device "nvmf_init_br2" 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:33:45.536 Cannot find device "nvmf_tgt_br" 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:33:45.536 Cannot find device "nvmf_tgt_br2" 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:33:45.536 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:33:45.796 Cannot find device "nvmf_init_br" 00:33:45.796 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:33:45.796 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:33:45.796 Cannot find device "nvmf_init_br2" 00:33:45.796 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:33:45.796 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:33:45.796 Cannot find device "nvmf_tgt_br" 00:33:45.796 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:33:45.796 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:33:45.796 Cannot find device "nvmf_tgt_br2" 00:33:45.796 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:33:45.796 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:33:45.796 Cannot find device "nvmf_br" 00:33:45.796 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:33:45.796 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:33:45.796 Cannot find device "nvmf_init_if" 00:33:45.796 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:33:45.796 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:33:45.796 Cannot find device "nvmf_init_if2" 00:33:45.796 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:33:45.796 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:45.796 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:45.796 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:33:45.796 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:45.796 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:45.796 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:33:45.796 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:33:45.796 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:45.796 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:33:45.796 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:45.796 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:45.796 13:54:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:45.796 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:45.796 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:45.796 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:33:45.796 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:33:45.796 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:33:45.796 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:33:45.796 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:33:45.796 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:33:45.796 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:33:45.796 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:33:45.796 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:33:45.796 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:45.796 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:45.796 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:45.796 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:33:45.796 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:33:45.796 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:33:45.796 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:33:45.796 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:45.796 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:46.055 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:46.055 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:33:46.055 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:33:46.055 13:54:43 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:33:46.055 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:46.055 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:33:46.055 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:33:46.055 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:46.055 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:33:46.055 00:33:46.055 --- 10.0.0.3 ping statistics --- 00:33:46.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.055 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:33:46.055 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:33:46.055 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:33:46.055 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.036 ms 00:33:46.055 00:33:46.055 --- 10.0.0.4 ping statistics --- 00:33:46.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.055 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:33:46.055 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:46.055 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:46.056 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:33:46.056 00:33:46.056 --- 10.0.0.1 ping statistics --- 00:33:46.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.056 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:33:46.056 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:33:46.056 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:46.056 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:33:46.056 00:33:46.056 --- 10.0.0.2 ping statistics --- 00:33:46.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.056 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:33:46.056 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:46.056 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:33:46.056 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:46.056 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:46.056 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:46.056 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:46.056 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:46.056 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:46.056 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:46.056 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:33:46.056 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:46.056 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:46.056 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:46.056 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=65551 00:33:46.056 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:46.056 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 65551 00:33:46.056 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 65551 ']' 00:33:46.056 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:46.056 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:46.056 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:46.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:46.056 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:46.056 13:54:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:46.056 [2024-11-20 13:54:43.249790] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:33:46.056 [2024-11-20 13:54:43.249861] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:46.315 [2024-11-20 13:54:43.398177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:46.315 [2024-11-20 13:54:43.466008] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:46.315 [2024-11-20 13:54:43.466068] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:46.315 [2024-11-20 13:54:43.466075] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:46.315 [2024-11-20 13:54:43.466080] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:46.315 [2024-11-20 13:54:43.466085] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:46.315 [2024-11-20 13:54:43.466445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:46.315 [2024-11-20 13:54:43.531117] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:33:46.883 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:46.883 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:33:46.883 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:46.883 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:46.883 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:46.883 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:46.883 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:33:46.883 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:33:46.883 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.883 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:46.883 [2024-11-20 13:54:44.178357] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:46.883 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.883 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:46.884 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.884 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:46.884 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.884 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:33:46.884 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.884 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:33:46.884 [2024-11-20 13:54:44.202419] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:33:47.144 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.144 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:33:47.144 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.144 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:47.144 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.144 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:33:47.144 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.144 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:47.144 malloc0 00:33:47.144 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.144 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:33:47.144 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.144 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:47.144 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.144 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:33:47.144 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:33:47.144 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:33:47.144 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:33:47.144 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:47.144 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:47.144 { 00:33:47.144 "params": { 00:33:47.144 "name": "Nvme$subsystem", 00:33:47.144 "trtype": "$TEST_TRANSPORT", 00:33:47.144 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:47.144 "adrfam": "ipv4", 00:33:47.144 "trsvcid": "$NVMF_PORT", 00:33:47.144 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:47.144 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:47.144 "hdgst": ${hdgst:-false}, 00:33:47.144 "ddgst": ${ddgst:-false} 00:33:47.144 }, 00:33:47.144 "method": "bdev_nvme_attach_controller" 00:33:47.144 } 00:33:47.144 EOF 00:33:47.144 )") 00:33:47.144 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:33:47.144 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
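Taken together, the rpc_cmd calls above stand up the zero-copy target for this test: a TCP transport created with --zcopy, one subsystem capped at 10 namespaces, a listener on 10.0.0.3:4420, and a 32 MiB malloc bdev (4 KiB blocks) attached as namespace 1. The same setup as plain rpc.py invocations, using the flags shown in the trace (rpc_cmd is the test suite's wrapper around this script):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
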
00:33:47.144 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:33:47.144 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:47.144 "params": { 00:33:47.144 "name": "Nvme1", 00:33:47.144 "trtype": "tcp", 00:33:47.144 "traddr": "10.0.0.3", 00:33:47.144 "adrfam": "ipv4", 00:33:47.144 "trsvcid": "4420", 00:33:47.144 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:47.144 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:47.144 "hdgst": false, 00:33:47.144 "ddgst": false 00:33:47.144 }, 00:33:47.144 "method": "bdev_nvme_attach_controller" 00:33:47.144 }' 00:33:47.144 [2024-11-20 13:54:44.306867] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:33:47.144 [2024-11-20 13:54:44.307029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65584 ] 00:33:47.144 [2024-11-20 13:54:44.454239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:47.403 [2024-11-20 13:54:44.534004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:47.403 [2024-11-20 13:54:44.599148] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:33:47.403 Running I/O for 10 seconds... 00:33:49.719 7488.00 IOPS, 58.50 MiB/s [2024-11-20T13:54:47.981Z] 7646.00 IOPS, 59.73 MiB/s [2024-11-20T13:54:48.917Z] 7498.00 IOPS, 58.58 MiB/s [2024-11-20T13:54:49.853Z] 7445.50 IOPS, 58.17 MiB/s [2024-11-20T13:54:50.791Z] 7456.60 IOPS, 58.25 MiB/s [2024-11-20T13:54:51.727Z] 7403.83 IOPS, 57.84 MiB/s [2024-11-20T13:54:53.104Z] 7447.86 IOPS, 58.19 MiB/s [2024-11-20T13:54:54.041Z] 7448.88 IOPS, 58.19 MiB/s [2024-11-20T13:54:54.979Z] 7431.78 IOPS, 58.06 MiB/s [2024-11-20T13:54:54.979Z] 7358.60 IOPS, 57.49 MiB/s 00:33:57.656 Latency(us) 00:33:57.656 [2024-11-20T13:54:54.979Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:57.656 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:33:57.656 Verification LBA range: start 0x0 length 0x1000 00:33:57.656 Nvme1n1 : 10.01 7360.95 57.51 0.00 0.00 17338.86 2747.36 64105.08 00:33:57.656 [2024-11-20T13:54:54.979Z] =================================================================================================================== 00:33:57.657 [2024-11-20T13:54:54.980Z] Total : 7360.95 57.51 0.00 0.00 17338.86 2747.36 64105.08 00:33:57.657 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:33:57.657 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=65707 00:33:57.657 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:33:57.657 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:33:57.657 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:57.657 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:33:57.657 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:33:57.657 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:57.657 [2024-11-20 13:54:54.921238] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.657 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:57.657 { 00:33:57.657 "params": { 00:33:57.657 "name": "Nvme$subsystem", 00:33:57.657 "trtype": "$TEST_TRANSPORT", 00:33:57.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:57.657 "adrfam": "ipv4", 00:33:57.657 "trsvcid": "$NVMF_PORT", 00:33:57.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:57.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:57.657 "hdgst": ${hdgst:-false}, 00:33:57.657 "ddgst": ${ddgst:-false} 00:33:57.657 }, 00:33:57.657 "method": "bdev_nvme_attach_controller" 00:33:57.657 } 00:33:57.657 EOF 00:33:57.657 )") 00:33:57.657 [2024-11-20 13:54:54.921292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.657 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:33:57.657 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:33:57.657 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:33:57.657 13:54:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:57.657 "params": { 00:33:57.657 "name": "Nvme1", 00:33:57.657 "trtype": "tcp", 00:33:57.657 "traddr": "10.0.0.3", 00:33:57.657 "adrfam": "ipv4", 00:33:57.657 "trsvcid": "4420", 00:33:57.657 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:57.657 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:57.657 "hdgst": false, 00:33:57.657 "ddgst": false 00:33:57.657 }, 00:33:57.657 "method": "bdev_nvme_attach_controller" 00:33:57.657 }' 00:33:57.657 [2024-11-20 13:54:54.933155] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.657 [2024-11-20 13:54:54.933187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.657 [2024-11-20 13:54:54.945117] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.657 [2024-11-20 13:54:54.945142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.657 [2024-11-20 13:54:54.953101] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.657 [2024-11-20 13:54:54.953209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.657 [2024-11-20 13:54:54.957587] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:33:57.657 [2024-11-20 13:54:54.957669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65707 ] 00:33:57.657 [2024-11-20 13:54:54.961098] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.657 [2024-11-20 13:54:54.961181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.657 [2024-11-20 13:54:54.973083] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.657 [2024-11-20 13:54:54.973151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.916 [2024-11-20 13:54:54.981060] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.916 [2024-11-20 13:54:54.981118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.916 [2024-11-20 13:54:54.989043] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.916 [2024-11-20 13:54:54.989092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.916 [2024-11-20 13:54:54.997022] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.916 [2024-11-20 13:54:54.997066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.916 [2024-11-20 13:54:55.005023] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.916 [2024-11-20 13:54:55.005068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.916 [2024-11-20 13:54:55.017012] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.916 [2024-11-20 13:54:55.017061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.916 [2024-11-20 13:54:55.025000] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.916 [2024-11-20 13:54:55.025049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.916 [2024-11-20 13:54:55.032998] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.916 [2024-11-20 13:54:55.033047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.916 [2024-11-20 13:54:55.040958] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.916 [2024-11-20 13:54:55.041000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.916 [2024-11-20 13:54:55.048965] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.916 [2024-11-20 13:54:55.049011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.916 [2024-11-20 13:54:55.056959] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.916 [2024-11-20 13:54:55.057026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.916 [2024-11-20 13:54:55.064950] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.916 [2024-11-20 13:54:55.064997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.916 [2024-11-20 13:54:55.072919] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.916 [2024-11-20 13:54:55.072968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.916 [2024-11-20 13:54:55.084898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.916 [2024-11-20 13:54:55.084943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.916 [2024-11-20 13:54:55.096877] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.916 [2024-11-20 13:54:55.096922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.916 [2024-11-20 13:54:55.101605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:57.916 [2024-11-20 13:54:55.108858] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.916 [2024-11-20 13:54:55.108914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.916 [2024-11-20 13:54:55.120836] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.916 [2024-11-20 13:54:55.120883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.916 [2024-11-20 13:54:55.132822] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.916 [2024-11-20 13:54:55.132889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.916 [2024-11-20 13:54:55.144801] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.916 [2024-11-20 13:54:55.144844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.916 [2024-11-20 13:54:55.156798] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.916 [2024-11-20 13:54:55.156853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.916 [2024-11-20 13:54:55.168790] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.916 [2024-11-20 13:54:55.168837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.916 [2024-11-20 13:54:55.175740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:57.916 [2024-11-20 13:54:55.180770] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.916 [2024-11-20 13:54:55.180831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.916 [2024-11-20 13:54:55.192750] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.916 [2024-11-20 13:54:55.192820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.916 [2024-11-20 13:54:55.204725] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.916 [2024-11-20 13:54:55.204787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.916 [2024-11-20 13:54:55.216688] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.916 [2024-11-20 13:54:55.216759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.916 [2024-11-20 13:54:55.228669] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.916 [2024-11-20 13:54:55.228730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:33:57.916 [2024-11-20 13:54:55.231071] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:33:58.175 [2024-11-20 13:54:55.240649] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.175 [2024-11-20 13:54:55.240743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.175 [2024-11-20 13:54:55.252626] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.175 [2024-11-20 13:54:55.252676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.175 [2024-11-20 13:54:55.264636] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.175 [2024-11-20 13:54:55.264728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.175 [2024-11-20 13:54:55.276592] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.175 [2024-11-20 13:54:55.276656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.175 [2024-11-20 13:54:55.288576] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.175 [2024-11-20 13:54:55.288634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.175 [2024-11-20 13:54:55.300562] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.175 [2024-11-20 13:54:55.300619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.175 [2024-11-20 13:54:55.312539] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.175 [2024-11-20 13:54:55.312595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.175 [2024-11-20 13:54:55.324521] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.175 [2024-11-20 13:54:55.324575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.175 [2024-11-20 13:54:55.336517] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.175 [2024-11-20 13:54:55.336579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.175 Running I/O for 5 seconds... 
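Note: "Running I/O for 5 seconds..." marks the start of bdevperf's timed measurement window; the per-interval progress lines further down (e.g. 14906.00 IOPS, 116.45 MiB/s) report throughput during that window. A quick sanity check of those figures, under the assumption of an 8 KiB (8192-byte) I/O size inferred from the numbers themselves (the bdevperf command line is not echoed in this log):

    # MiB/s should equal IOPS * io_size / 2^20; with io_size = 8192 this reproduces the log's figures.
    for iops in 14906.00 13773.00 14253.33; do
        awk -v iops="$iops" 'BEGIN { printf "%s IOPS -> %.2f MiB/s\n", iops, iops * 8192 / 1048576 }'
    done

This yields 116.45, 107.60 and 111.35 MiB/s respectively, matching the progress lines that follow.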
00:33:58.175 [2024-11-20 13:54:55.348494] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.175 [2024-11-20 13:54:55.348553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.175 [2024-11-20 13:54:55.364940] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.175 [2024-11-20 13:54:55.365029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.175 [2024-11-20 13:54:55.376154] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.175 [2024-11-20 13:54:55.376242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.175 [2024-11-20 13:54:55.391203] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.175 [2024-11-20 13:54:55.391268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.175 [2024-11-20 13:54:55.407603] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.175 [2024-11-20 13:54:55.407684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.175 [2024-11-20 13:54:55.417927] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.175 [2024-11-20 13:54:55.418006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.175 [2024-11-20 13:54:55.431291] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.175 [2024-11-20 13:54:55.431356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.175 [2024-11-20 13:54:55.447129] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.175 [2024-11-20 13:54:55.447208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.175 [2024-11-20 13:54:55.463082] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.175 [2024-11-20 13:54:55.463167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.175 [2024-11-20 13:54:55.474991] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.175 [2024-11-20 13:54:55.475073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.175 [2024-11-20 13:54:55.490492] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.175 [2024-11-20 13:54:55.490568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.447 [2024-11-20 13:54:55.506461] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.447 [2024-11-20 13:54:55.506533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.447 [2024-11-20 13:54:55.519388] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.447 [2024-11-20 13:54:55.519468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.447 [2024-11-20 13:54:55.534542] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.447 [2024-11-20 13:54:55.534614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.447 [2024-11-20 13:54:55.551043] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.448 
[2024-11-20 13:54:55.551146] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.448 [2024-11-20 13:54:55.567714] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.448 [2024-11-20 13:54:55.567808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.448 [2024-11-20 13:54:55.583083] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.448 [2024-11-20 13:54:55.583164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.448 [2024-11-20 13:54:55.598158] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.448 [2024-11-20 13:54:55.598226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.448 [2024-11-20 13:54:55.613674] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.448 [2024-11-20 13:54:55.613758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.448 [2024-11-20 13:54:55.628501] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.448 [2024-11-20 13:54:55.628532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.448 [2024-11-20 13:54:55.644834] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.448 [2024-11-20 13:54:55.644867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.448 [2024-11-20 13:54:55.661267] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.448 [2024-11-20 13:54:55.661304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.448 [2024-11-20 13:54:55.678448] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.448 [2024-11-20 13:54:55.678483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.448 [2024-11-20 13:54:55.694136] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.448 [2024-11-20 13:54:55.694168] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.448 [2024-11-20 13:54:55.706014] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.448 [2024-11-20 13:54:55.706047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.448 [2024-11-20 13:54:55.722404] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.448 [2024-11-20 13:54:55.722506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.448 [2024-11-20 13:54:55.735916] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.448 [2024-11-20 13:54:55.735952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.448 [2024-11-20 13:54:55.751770] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.448 [2024-11-20 13:54:55.751799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.448 [2024-11-20 13:54:55.762934] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.448 [2024-11-20 13:54:55.762982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.706 [2024-11-20 13:54:55.770248] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.706 [2024-11-20 13:54:55.770276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.706 [2024-11-20 13:54:55.781268] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.706 [2024-11-20 13:54:55.781372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.706 [2024-11-20 13:54:55.789664] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.706 [2024-11-20 13:54:55.789695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.706 [2024-11-20 13:54:55.799291] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.706 [2024-11-20 13:54:55.799321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.706 [2024-11-20 13:54:55.813835] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.706 [2024-11-20 13:54:55.813862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.706 [2024-11-20 13:54:55.824586] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.706 [2024-11-20 13:54:55.824667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.706 [2024-11-20 13:54:55.839394] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.706 [2024-11-20 13:54:55.839426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.706 [2024-11-20 13:54:55.851042] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.706 [2024-11-20 13:54:55.851074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.706 [2024-11-20 13:54:55.866408] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.706 [2024-11-20 13:54:55.866443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.706 [2024-11-20 13:54:55.882391] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.706 [2024-11-20 13:54:55.882424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.706 [2024-11-20 13:54:55.893337] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.706 [2024-11-20 13:54:55.893367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.706 [2024-11-20 13:54:55.908333] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.706 [2024-11-20 13:54:55.908364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.706 [2024-11-20 13:54:55.924342] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.706 [2024-11-20 13:54:55.924382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.706 [2024-11-20 13:54:55.936167] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.706 [2024-11-20 13:54:55.936198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.706 [2024-11-20 13:54:55.951351] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.706 [2024-11-20 13:54:55.951445] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.706 [2024-11-20 13:54:55.967093] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.706 [2024-11-20 13:54:55.967122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.706 [2024-11-20 13:54:55.983478] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.706 [2024-11-20 13:54:55.983511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.706 [2024-11-20 13:54:56.000522] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.706 [2024-11-20 13:54:56.000562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.706 [2024-11-20 13:54:56.016199] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.706 [2024-11-20 13:54:56.016236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.706 [2024-11-20 13:54:56.027125] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.706 [2024-11-20 13:54:56.027158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.965 [2024-11-20 13:54:56.041449] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.965 [2024-11-20 13:54:56.041479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.965 [2024-11-20 13:54:56.055516] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.965 [2024-11-20 13:54:56.055545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.965 [2024-11-20 13:54:56.071302] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.965 [2024-11-20 13:54:56.071378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.965 [2024-11-20 13:54:56.087139] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.965 [2024-11-20 13:54:56.087211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.965 [2024-11-20 13:54:56.098598] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.965 [2024-11-20 13:54:56.098628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.965 [2024-11-20 13:54:56.113379] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.965 [2024-11-20 13:54:56.113448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.965 [2024-11-20 13:54:56.124803] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.965 [2024-11-20 13:54:56.124837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.965 [2024-11-20 13:54:56.140251] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.965 [2024-11-20 13:54:56.140330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.965 [2024-11-20 13:54:56.155756] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.965 [2024-11-20 13:54:56.155786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.965 [2024-11-20 13:54:56.170362] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.965 [2024-11-20 13:54:56.170393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.965 [2024-11-20 13:54:56.185956] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.965 [2024-11-20 13:54:56.185988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.965 [2024-11-20 13:54:56.200348] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.965 [2024-11-20 13:54:56.200384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.965 [2024-11-20 13:54:56.216289] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.965 [2024-11-20 13:54:56.216322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.965 [2024-11-20 13:54:56.232406] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.965 [2024-11-20 13:54:56.232461] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.965 [2024-11-20 13:54:56.247768] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.965 [2024-11-20 13:54:56.247816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.965 [2024-11-20 13:54:56.263759] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.965 [2024-11-20 13:54:56.263794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.965 [2024-11-20 13:54:56.274573] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.965 [2024-11-20 13:54:56.274607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.226 [2024-11-20 13:54:56.290081] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.226 [2024-11-20 13:54:56.290115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.226 [2024-11-20 13:54:56.305432] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.226 [2024-11-20 13:54:56.305464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.226 [2024-11-20 13:54:56.319361] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.226 [2024-11-20 13:54:56.319466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.226 [2024-11-20 13:54:56.334073] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.226 [2024-11-20 13:54:56.334158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.226 14906.00 IOPS, 116.45 MiB/s [2024-11-20T13:54:56.549Z] [2024-11-20 13:54:56.346042] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.226 [2024-11-20 13:54:56.346088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.226 [2024-11-20 13:54:56.361853] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.226 [2024-11-20 13:54:56.361886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.226 [2024-11-20 13:54:56.377510] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:33:59.226 [2024-11-20 13:54:56.377612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.226 [2024-11-20 13:54:56.391889] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.226 [2024-11-20 13:54:56.391920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.226 [2024-11-20 13:54:56.406707] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.226 [2024-11-20 13:54:56.406769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.226 [2024-11-20 13:54:56.423598] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.226 [2024-11-20 13:54:56.423697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.226 [2024-11-20 13:54:56.438841] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.226 [2024-11-20 13:54:56.438872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.226 [2024-11-20 13:54:56.453223] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.226 [2024-11-20 13:54:56.453253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.226 [2024-11-20 13:54:56.467598] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.226 [2024-11-20 13:54:56.467629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.226 [2024-11-20 13:54:56.478580] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.226 [2024-11-20 13:54:56.478669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.226 [2024-11-20 13:54:56.494764] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.226 [2024-11-20 13:54:56.494798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.226 [2024-11-20 13:54:56.510497] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.226 [2024-11-20 13:54:56.510530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.227 [2024-11-20 13:54:56.524823] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.227 [2024-11-20 13:54:56.524853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.227 [2024-11-20 13:54:56.534946] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.227 [2024-11-20 13:54:56.534975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.486 [2024-11-20 13:54:56.551076] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.486 [2024-11-20 13:54:56.551121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.486 [2024-11-20 13:54:56.566030] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.486 [2024-11-20 13:54:56.566066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.486 [2024-11-20 13:54:56.582598] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.486 [2024-11-20 13:54:56.582632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.486 [2024-11-20 13:54:56.598790] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.486 [2024-11-20 13:54:56.598824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.486 [2024-11-20 13:54:56.610440] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.486 [2024-11-20 13:54:56.610478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.486 [2024-11-20 13:54:56.635157] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.486 [2024-11-20 13:54:56.635212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.486 [2024-11-20 13:54:56.668075] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.486 [2024-11-20 13:54:56.668216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.486 [2024-11-20 13:54:56.704713] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.486 [2024-11-20 13:54:56.704776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.486 [2024-11-20 13:54:56.731016] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.486 [2024-11-20 13:54:56.731134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.486 [2024-11-20 13:54:56.746793] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.486 [2024-11-20 13:54:56.746826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.486 [2024-11-20 13:54:56.762609] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.486 [2024-11-20 13:54:56.762733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.486 [2024-11-20 13:54:56.779396] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.486 [2024-11-20 13:54:56.779430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.486 [2024-11-20 13:54:56.794395] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.486 [2024-11-20 13:54:56.794477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.747 [2024-11-20 13:54:56.809801] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.747 [2024-11-20 13:54:56.809837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.747 [2024-11-20 13:54:56.825746] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.747 [2024-11-20 13:54:56.825779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.747 [2024-11-20 13:54:56.842253] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.747 [2024-11-20 13:54:56.842285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.747 [2024-11-20 13:54:56.853608] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.747 [2024-11-20 13:54:56.853640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.747 [2024-11-20 13:54:56.869341] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.747 [2024-11-20 13:54:56.869375] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.747 [2024-11-20 13:54:56.885429] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.747 [2024-11-20 13:54:56.885463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.747 [2024-11-20 13:54:56.900915] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.747 [2024-11-20 13:54:56.900946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.747 [2024-11-20 13:54:56.914946] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.747 [2024-11-20 13:54:56.914976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.747 [2024-11-20 13:54:56.929879] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.747 [2024-11-20 13:54:56.929908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.747 [2024-11-20 13:54:56.945682] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.747 [2024-11-20 13:54:56.945723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.747 [2024-11-20 13:54:56.959499] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.747 [2024-11-20 13:54:56.959531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.747 [2024-11-20 13:54:56.974468] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.747 [2024-11-20 13:54:56.974499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.747 [2024-11-20 13:54:56.990482] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.747 [2024-11-20 13:54:56.990511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.747 [2024-11-20 13:54:57.001654] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.747 [2024-11-20 13:54:57.001685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.747 [2024-11-20 13:54:57.017337] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.747 [2024-11-20 13:54:57.017368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.747 [2024-11-20 13:54:57.033312] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.747 [2024-11-20 13:54:57.033345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.747 [2024-11-20 13:54:57.049849] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.747 [2024-11-20 13:54:57.049884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.747 [2024-11-20 13:54:57.065747] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.747 [2024-11-20 13:54:57.065776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.006 [2024-11-20 13:54:57.079755] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.007 [2024-11-20 13:54:57.079785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.007 [2024-11-20 13:54:57.094735] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.007 [2024-11-20 13:54:57.094763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.007 [2024-11-20 13:54:57.110480] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.007 [2024-11-20 13:54:57.110578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.007 [2024-11-20 13:54:57.125049] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.007 [2024-11-20 13:54:57.125117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.007 [2024-11-20 13:54:57.139619] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.007 [2024-11-20 13:54:57.139658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.007 [2024-11-20 13:54:57.165321] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.007 [2024-11-20 13:54:57.165360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.007 [2024-11-20 13:54:57.200100] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.007 [2024-11-20 13:54:57.200214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.007 [2024-11-20 13:54:57.235250] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.007 [2024-11-20 13:54:57.235366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.007 [2024-11-20 13:54:57.249648] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.007 [2024-11-20 13:54:57.249681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.007 [2024-11-20 13:54:57.264555] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.007 [2024-11-20 13:54:57.264586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.007 [2024-11-20 13:54:57.280701] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.007 [2024-11-20 13:54:57.280744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.007 [2024-11-20 13:54:57.296659] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.007 [2024-11-20 13:54:57.296690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.007 [2024-11-20 13:54:57.310243] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.007 [2024-11-20 13:54:57.310337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.007 [2024-11-20 13:54:57.326136] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.007 [2024-11-20 13:54:57.326168] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.267 [2024-11-20 13:54:57.342535] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.267 [2024-11-20 13:54:57.342570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.267 13773.00 IOPS, 107.60 MiB/s [2024-11-20T13:54:57.590Z] [2024-11-20 13:54:57.357446] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:34:00.267 [2024-11-20 13:54:57.357544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.267 [2024-11-20 13:54:57.372368] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.267 [2024-11-20 13:54:57.372441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.267 [2024-11-20 13:54:57.388131] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.267 [2024-11-20 13:54:57.388174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.267 [2024-11-20 13:54:57.402106] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.267 [2024-11-20 13:54:57.402137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.267 [2024-11-20 13:54:57.416940] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.267 [2024-11-20 13:54:57.416969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.267 [2024-11-20 13:54:57.432650] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.267 [2024-11-20 13:54:57.432680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.267 [2024-11-20 13:54:57.447198] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.267 [2024-11-20 13:54:57.447290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.267 [2024-11-20 13:54:57.458132] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.267 [2024-11-20 13:54:57.458204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.267 [2024-11-20 13:54:57.473263] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.267 [2024-11-20 13:54:57.473334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.267 [2024-11-20 13:54:57.489934] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.267 [2024-11-20 13:54:57.489974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.267 [2024-11-20 13:54:57.506442] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.267 [2024-11-20 13:54:57.506472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.267 [2024-11-20 13:54:57.522443] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.267 [2024-11-20 13:54:57.522473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.267 [2024-11-20 13:54:57.536729] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.267 [2024-11-20 13:54:57.536757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.267 [2024-11-20 13:54:57.551743] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.267 [2024-11-20 13:54:57.551771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.267 [2024-11-20 13:54:57.567382] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.267 [2024-11-20 13:54:57.567477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.267 [2024-11-20 13:54:57.580565] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.267 [2024-11-20 13:54:57.580597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.527 [2024-11-20 13:54:57.596053] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.527 [2024-11-20 13:54:57.596084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.527 [2024-11-20 13:54:57.612267] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.527 [2024-11-20 13:54:57.612295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.527 [2024-11-20 13:54:57.623395] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.527 [2024-11-20 13:54:57.623424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.527 [2024-11-20 13:54:57.640080] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.527 [2024-11-20 13:54:57.640123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.527 [2024-11-20 13:54:57.654311] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.527 [2024-11-20 13:54:57.654418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.527 [2024-11-20 13:54:57.669575] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.527 [2024-11-20 13:54:57.669647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.527 [2024-11-20 13:54:57.684793] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.527 [2024-11-20 13:54:57.684819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.527 [2024-11-20 13:54:57.700205] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.527 [2024-11-20 13:54:57.700280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.527 [2024-11-20 13:54:57.715619] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.527 [2024-11-20 13:54:57.715697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.527 [2024-11-20 13:54:57.730894] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.527 [2024-11-20 13:54:57.730943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.527 [2024-11-20 13:54:57.750185] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.527 [2024-11-20 13:54:57.750288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.527 [2024-11-20 13:54:57.764078] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.527 [2024-11-20 13:54:57.764123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.527 [2024-11-20 13:54:57.780181] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.527 [2024-11-20 13:54:57.780218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.527 [2024-11-20 13:54:57.795599] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.527 [2024-11-20 13:54:57.795630] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.527 [2024-11-20 13:54:57.806659] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.527 [2024-11-20 13:54:57.806689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.527 [2024-11-20 13:54:57.821799] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.527 [2024-11-20 13:54:57.821828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.527 [2024-11-20 13:54:57.837124] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.527 [2024-11-20 13:54:57.837221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.787 [2024-11-20 13:54:57.852837] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.787 [2024-11-20 13:54:57.852882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.787 [2024-11-20 13:54:57.868884] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.787 [2024-11-20 13:54:57.868944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.788 [2024-11-20 13:54:57.880488] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.788 [2024-11-20 13:54:57.880603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.788 [2024-11-20 13:54:57.895699] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.788 [2024-11-20 13:54:57.895744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.788 [2024-11-20 13:54:57.912259] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.788 [2024-11-20 13:54:57.912338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.788 [2024-11-20 13:54:57.927858] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.788 [2024-11-20 13:54:57.927888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.788 [2024-11-20 13:54:57.943144] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.788 [2024-11-20 13:54:57.943228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.788 [2024-11-20 13:54:57.958997] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.788 [2024-11-20 13:54:57.959032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.788 [2024-11-20 13:54:57.972942] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.788 [2024-11-20 13:54:57.972981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.788 [2024-11-20 13:54:57.988023] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.788 [2024-11-20 13:54:57.988057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.788 [2024-11-20 13:54:58.003786] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.788 [2024-11-20 13:54:58.003821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.788 [2024-11-20 13:54:58.018731] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.788 [2024-11-20 13:54:58.018774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.788 [2024-11-20 13:54:58.033960] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.788 [2024-11-20 13:54:58.034060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.788 [2024-11-20 13:54:58.049378] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.788 [2024-11-20 13:54:58.049466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.788 [2024-11-20 13:54:58.065082] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.788 [2024-11-20 13:54:58.065146] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.788 [2024-11-20 13:54:58.079506] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.788 [2024-11-20 13:54:58.079570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.788 [2024-11-20 13:54:58.095774] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.788 [2024-11-20 13:54:58.095836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.048 [2024-11-20 13:54:58.111694] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.048 [2024-11-20 13:54:58.111767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.048 [2024-11-20 13:54:58.122849] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.048 [2024-11-20 13:54:58.122908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.048 [2024-11-20 13:54:58.137976] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.048 [2024-11-20 13:54:58.138038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.048 [2024-11-20 13:54:58.153482] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.048 [2024-11-20 13:54:58.153549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.048 [2024-11-20 13:54:58.167928] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.048 [2024-11-20 13:54:58.168014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.048 [2024-11-20 13:54:58.178882] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.048 [2024-11-20 13:54:58.178952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.048 [2024-11-20 13:54:58.194496] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.048 [2024-11-20 13:54:58.194558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.048 [2024-11-20 13:54:58.209551] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.048 [2024-11-20 13:54:58.209612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.048 [2024-11-20 13:54:58.224198] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.048 [2024-11-20 13:54:58.224261] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.048 [2024-11-20 13:54:58.239922] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.048 [2024-11-20 13:54:58.239985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.048 [2024-11-20 13:54:58.254031] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.048 [2024-11-20 13:54:58.254091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.048 [2024-11-20 13:54:58.264750] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.048 [2024-11-20 13:54:58.264810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.048 [2024-11-20 13:54:58.279836] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.048 [2024-11-20 13:54:58.279899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.048 [2024-11-20 13:54:58.295385] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.048 [2024-11-20 13:54:58.295447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.048 [2024-11-20 13:54:58.309925] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.048 [2024-11-20 13:54:58.309989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.048 [2024-11-20 13:54:58.320372] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.048 [2024-11-20 13:54:58.320441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.048 [2024-11-20 13:54:58.336545] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.048 [2024-11-20 13:54:58.336672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.048 14253.33 IOPS, 111.35 MiB/s [2024-11-20T13:54:58.371Z] [2024-11-20 13:54:58.352331] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.048 [2024-11-20 13:54:58.352457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.048 [2024-11-20 13:54:58.368181] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.048 [2024-11-20 13:54:58.368255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.309 [2024-11-20 13:54:58.383684] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.309 [2024-11-20 13:54:58.383770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.309 [2024-11-20 13:54:58.397378] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.309 [2024-11-20 13:54:58.397439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.309 [2024-11-20 13:54:58.411543] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.309 [2024-11-20 13:54:58.411603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.309 [2024-11-20 13:54:58.422232] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.309 [2024-11-20 13:54:58.422292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.309 [2024-11-20 
13:54:58.437050] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.309 [2024-11-20 13:54:58.437136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.309 [2024-11-20 13:54:58.452661] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.309 [2024-11-20 13:54:58.452771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.309 [2024-11-20 13:54:58.470609] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.309 [2024-11-20 13:54:58.470673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.309 [2024-11-20 13:54:58.485699] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.309 [2024-11-20 13:54:58.485771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.309 [2024-11-20 13:54:58.501232] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.309 [2024-11-20 13:54:58.501293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.309 [2024-11-20 13:54:58.516306] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.309 [2024-11-20 13:54:58.516366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.309 [2024-11-20 13:54:58.532231] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.309 [2024-11-20 13:54:58.532290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.309 [2024-11-20 13:54:58.547676] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.309 [2024-11-20 13:54:58.547757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.309 [2024-11-20 13:54:58.562797] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.309 [2024-11-20 13:54:58.562857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.309 [2024-11-20 13:54:58.578358] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.309 [2024-11-20 13:54:58.578418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.309 [2024-11-20 13:54:58.593206] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.309 [2024-11-20 13:54:58.593269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.309 [2024-11-20 13:54:58.609919] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.309 [2024-11-20 13:54:58.609979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.309 [2024-11-20 13:54:58.625907] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.309 [2024-11-20 13:54:58.625974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.569 [2024-11-20 13:54:58.637689] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.569 [2024-11-20 13:54:58.637824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.569 [2024-11-20 13:54:58.653345] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.569 [2024-11-20 13:54:58.653414] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.569 [2024-11-20 13:54:58.668395] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.569 [2024-11-20 13:54:58.668459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.569 [2024-11-20 13:54:58.683245] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.569 [2024-11-20 13:54:58.683310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.569 [2024-11-20 13:54:58.699586] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.569 [2024-11-20 13:54:58.699675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.569 [2024-11-20 13:54:58.710956] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.569 [2024-11-20 13:54:58.711069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.569 [2024-11-20 13:54:58.726355] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.569 [2024-11-20 13:54:58.726447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.569 [2024-11-20 13:54:58.742150] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.569 [2024-11-20 13:54:58.742222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.569 [2024-11-20 13:54:58.756287] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.569 [2024-11-20 13:54:58.756348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.570 [2024-11-20 13:54:58.767732] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.570 [2024-11-20 13:54:58.767792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.570 [2024-11-20 13:54:58.783121] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.570 [2024-11-20 13:54:58.783186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.570 [2024-11-20 13:54:58.798394] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.570 [2024-11-20 13:54:58.798460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.570 [2024-11-20 13:54:58.813277] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.570 [2024-11-20 13:54:58.813336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.570 [2024-11-20 13:54:58.824406] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.570 [2024-11-20 13:54:58.824491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.570 [2024-11-20 13:54:58.839197] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.570 [2024-11-20 13:54:58.839309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.570 [2024-11-20 13:54:58.855733] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.570 [2024-11-20 13:54:58.855806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.570 [2024-11-20 13:54:58.866601] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.570 [2024-11-20 13:54:58.866665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.570 [2024-11-20 13:54:58.881800] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.570 [2024-11-20 13:54:58.881825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.830 [2024-11-20 13:54:58.897557] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.830 [2024-11-20 13:54:58.897634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.830 [2024-11-20 13:54:58.911567] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.830 [2024-11-20 13:54:58.911600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.830 [2024-11-20 13:54:58.926575] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.830 [2024-11-20 13:54:58.926604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.830 [2024-11-20 13:54:58.942669] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.830 [2024-11-20 13:54:58.942701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.830 [2024-11-20 13:54:58.953813] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.830 [2024-11-20 13:54:58.953844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.830 [2024-11-20 13:54:58.969287] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.830 [2024-11-20 13:54:58.969331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.830 [2024-11-20 13:54:58.985052] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.830 [2024-11-20 13:54:58.985086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.830 [2024-11-20 13:54:59.002403] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.830 [2024-11-20 13:54:59.002492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.830 [2024-11-20 13:54:59.017677] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.830 [2024-11-20 13:54:59.017765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.830 [2024-11-20 13:54:59.032605] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.830 [2024-11-20 13:54:59.032638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.830 [2024-11-20 13:54:59.048791] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.830 [2024-11-20 13:54:59.048823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.830 [2024-11-20 13:54:59.064910] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.830 [2024-11-20 13:54:59.064939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.830 [2024-11-20 13:54:59.079757] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.830 [2024-11-20 13:54:59.079787] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.830 [2024-11-20 13:54:59.095598] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.830 [2024-11-20 13:54:59.095687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.830 [2024-11-20 13:54:59.109799] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.830 [2024-11-20 13:54:59.109827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.830 [2024-11-20 13:54:59.125973] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.830 [2024-11-20 13:54:59.126006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:01.830 [2024-11-20 13:54:59.139939] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:01.830 [2024-11-20 13:54:59.139977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.109 [2024-11-20 13:54:59.154537] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.109 [2024-11-20 13:54:59.154567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.109 [2024-11-20 13:54:59.170812] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.109 [2024-11-20 13:54:59.170841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.109 [2024-11-20 13:54:59.186142] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.109 [2024-11-20 13:54:59.186238] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.109 [2024-11-20 13:54:59.200855] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.109 [2024-11-20 13:54:59.200885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.109 [2024-11-20 13:54:59.211686] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.109 [2024-11-20 13:54:59.211726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.109 [2024-11-20 13:54:59.226399] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.109 [2024-11-20 13:54:59.226429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.109 [2024-11-20 13:54:59.237761] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.109 [2024-11-20 13:54:59.237802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.109 [2024-11-20 13:54:59.252880] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.109 [2024-11-20 13:54:59.252908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.109 [2024-11-20 13:54:59.268656] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.109 [2024-11-20 13:54:59.268766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.109 [2024-11-20 13:54:59.283275] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.109 [2024-11-20 13:54:59.283304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.109 [2024-11-20 13:54:59.293914] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.109 [2024-11-20 13:54:59.293941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.109 [2024-11-20 13:54:59.308697] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.109 [2024-11-20 13:54:59.308781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.109 [2024-11-20 13:54:59.323844] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.109 [2024-11-20 13:54:59.323874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.109 [2024-11-20 13:54:59.338311] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.109 [2024-11-20 13:54:59.338349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.109 14561.25 IOPS, 113.76 MiB/s [2024-11-20T13:54:59.432Z] [2024-11-20 13:54:59.349755] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.109 [2024-11-20 13:54:59.349846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.109 [2024-11-20 13:54:59.364558] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.109 [2024-11-20 13:54:59.364624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.109 [2024-11-20 13:54:59.375162] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.109 [2024-11-20 13:54:59.375225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.109 [2024-11-20 13:54:59.390356] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.109 [2024-11-20 13:54:59.390415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.109 [2024-11-20 13:54:59.406205] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.109 [2024-11-20 13:54:59.406270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.397 [2024-11-20 13:54:59.419937] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.397 [2024-11-20 13:54:59.420017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.397 [2024-11-20 13:54:59.435571] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.397 [2024-11-20 13:54:59.435652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.397 [2024-11-20 13:54:59.452392] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.397 [2024-11-20 13:54:59.452461] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.397 [2024-11-20 13:54:59.468365] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.397 [2024-11-20 13:54:59.468435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.397 [2024-11-20 13:54:59.484736] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.397 [2024-11-20 13:54:59.484805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.397 [2024-11-20 13:54:59.495401] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
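The long run of paired errors above is expected here: while I/O is running against NSID 1, the suite keeps asking the target to attach another bdev at that same NSID, spdk_nvmf_subsystem_add_ns_ext rejects each attempt with "Requested NSID 1 already in use", and the RPC layer then reports "Unable to add namespace", so the pair repeats once per attempt without failing the run. A minimal sketch of a call that gets rejected this way, assuming a target whose subsystem nqn.2016-06.io.spdk:cnode1 already has an active NSID 1 and a spare bdev named malloc0 (the names this run uses), issued through scripts/rpc.py instead of the harness's rpc_cmd wrapper:

  # rejected while NSID 1 is occupied; the target logs the two errors seen above
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # freeing the NSID first would let the same call succeed
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The interleaved throughput lines (for example "14561.25 IOPS, 113.76 MiB/s") are the concurrent I/O job's progress; at the 8192-byte I/O size reported in the job summary further down, 14561.25 IOPS x 8192 B comes to about 113.76 MiB/s, so the two columns are consistent.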
00:34:02.397 [2024-11-20 13:54:59.495467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.397 [2024-11-20 13:54:59.510762] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.397 [2024-11-20 13:54:59.510828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.397 [2024-11-20 13:54:59.527833] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.397 [2024-11-20 13:54:59.527912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.397 [2024-11-20 13:54:59.543290] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.397 [2024-11-20 13:54:59.543360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.397 [2024-11-20 13:54:59.557492] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.397 [2024-11-20 13:54:59.557558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.397 [2024-11-20 13:54:59.572131] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.397 [2024-11-20 13:54:59.572215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.397 [2024-11-20 13:54:59.588582] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.397 [2024-11-20 13:54:59.588660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.397 [2024-11-20 13:54:59.599600] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.397 [2024-11-20 13:54:59.599672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.397 [2024-11-20 13:54:59.614454] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.397 [2024-11-20 13:54:59.614529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.397 [2024-11-20 13:54:59.630573] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.397 [2024-11-20 13:54:59.630648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.397 [2024-11-20 13:54:59.645264] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.397 [2024-11-20 13:54:59.645354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.397 [2024-11-20 13:54:59.656136] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.397 [2024-11-20 13:54:59.656208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.397 [2024-11-20 13:54:59.671440] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.397 [2024-11-20 13:54:59.671524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.397 [2024-11-20 13:54:59.687954] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.397 [2024-11-20 13:54:59.687987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.397 [2024-11-20 13:54:59.704055] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.397 [2024-11-20 13:54:59.704086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.397 [2024-11-20 13:54:59.715478] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.397 [2024-11-20 13:54:59.715508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.656 [2024-11-20 13:54:59.731333] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.656 [2024-11-20 13:54:59.731366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.656 [2024-11-20 13:54:59.746970] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.656 [2024-11-20 13:54:59.747000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.656 [2024-11-20 13:54:59.761535] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.656 [2024-11-20 13:54:59.761564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.656 [2024-11-20 13:54:59.778005] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.656 [2024-11-20 13:54:59.778099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.656 [2024-11-20 13:54:59.793976] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.656 [2024-11-20 13:54:59.794012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.656 [2024-11-20 13:54:59.809556] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.656 [2024-11-20 13:54:59.809682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.656 [2024-11-20 13:54:59.826335] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.656 [2024-11-20 13:54:59.826377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.656 [2024-11-20 13:54:59.843192] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.656 [2024-11-20 13:54:59.843231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.656 [2024-11-20 13:54:59.858701] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.656 [2024-11-20 13:54:59.858741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.656 [2024-11-20 13:54:59.874319] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.656 [2024-11-20 13:54:59.874355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.656 [2024-11-20 13:54:59.889466] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.656 [2024-11-20 13:54:59.889565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.656 [2024-11-20 13:54:59.905014] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.656 [2024-11-20 13:54:59.905046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.656 [2024-11-20 13:54:59.920098] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.656 [2024-11-20 13:54:59.920207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.656 [2024-11-20 13:54:59.933918] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.656 [2024-11-20 13:54:59.933951] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.656 [2024-11-20 13:54:59.948973] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.656 [2024-11-20 13:54:59.949002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.656 [2024-11-20 13:54:59.964991] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.656 [2024-11-20 13:54:59.965033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.656 [2024-11-20 13:54:59.976219] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.656 [2024-11-20 13:54:59.976260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.916 [2024-11-20 13:54:59.991867] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.916 [2024-11-20 13:54:59.991901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.916 [2024-11-20 13:55:00.006597] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.916 [2024-11-20 13:55:00.006629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.916 [2024-11-20 13:55:00.020646] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.916 [2024-11-20 13:55:00.020677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.916 [2024-11-20 13:55:00.036024] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.916 [2024-11-20 13:55:00.036065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.916 [2024-11-20 13:55:00.051719] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.916 [2024-11-20 13:55:00.051777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.916 [2024-11-20 13:55:00.066463] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.916 [2024-11-20 13:55:00.066510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.916 [2024-11-20 13:55:00.077819] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.916 [2024-11-20 13:55:00.077850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.916 [2024-11-20 13:55:00.094149] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.916 [2024-11-20 13:55:00.094245] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.916 [2024-11-20 13:55:00.109569] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.916 [2024-11-20 13:55:00.109602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.916 [2024-11-20 13:55:00.120281] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.916 [2024-11-20 13:55:00.120369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.916 [2024-11-20 13:55:00.135892] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.916 [2024-11-20 13:55:00.135924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.916 [2024-11-20 13:55:00.152148] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.916 [2024-11-20 13:55:00.152179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.916 [2024-11-20 13:55:00.166725] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.916 [2024-11-20 13:55:00.166756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.916 [2024-11-20 13:55:00.183311] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.916 [2024-11-20 13:55:00.183411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.916 [2024-11-20 13:55:00.199546] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.916 [2024-11-20 13:55:00.199589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.916 [2024-11-20 13:55:00.210584] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.916 [2024-11-20 13:55:00.210680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:02.916 [2024-11-20 13:55:00.225736] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:02.916 [2024-11-20 13:55:00.225766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:03.176 [2024-11-20 13:55:00.241830] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:03.176 [2024-11-20 13:55:00.241859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:03.176 [2024-11-20 13:55:00.252864] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:03.176 [2024-11-20 13:55:00.252891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:03.176 [2024-11-20 13:55:00.267666] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:03.176 [2024-11-20 13:55:00.267696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:03.176 [2024-11-20 13:55:00.283234] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:03.176 [2024-11-20 13:55:00.283263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:03.176 [2024-11-20 13:55:00.297635] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:03.176 [2024-11-20 13:55:00.297666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:03.176 [2024-11-20 13:55:00.313861] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:03.176 [2024-11-20 13:55:00.313891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:03.176 [2024-11-20 13:55:00.330321] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:03.176 [2024-11-20 13:55:00.330354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:03.176 14684.80 IOPS, 114.72 MiB/s [2024-11-20T13:55:00.499Z] [2024-11-20 13:55:00.344237] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:03.176 00:34:03.176 Latency(us) 00:34:03.176 [2024-11-20T13:55:00.499Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:03.176 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 
8192) 00:34:03.176 Nvme1n1 : 5.01 14686.88 114.74 0.00 0.00 8705.91 3276.80 64105.08 00:34:03.176 [2024-11-20T13:55:00.499Z] =================================================================================================================== 00:34:03.176 [2024-11-20T13:55:00.499Z] Total : 14686.88 114.74 0.00 0.00 8705.91 3276.80 64105.08 00:34:03.176 [2024-11-20 13:55:00.344349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:03.176 [2024-11-20 13:55:00.354237] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:03.176 [2024-11-20 13:55:00.354265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:03.176 [2024-11-20 13:55:00.366192] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:03.176 [2024-11-20 13:55:00.366214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:03.176 [2024-11-20 13:55:00.378170] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:03.176 [2024-11-20 13:55:00.378219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:03.176 [2024-11-20 13:55:00.390145] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:03.176 [2024-11-20 13:55:00.390198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:03.176 [2024-11-20 13:55:00.402147] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:03.176 [2024-11-20 13:55:00.402214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:03.176 [2024-11-20 13:55:00.414119] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:03.176 [2024-11-20 13:55:00.414180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:03.176 [2024-11-20 13:55:00.426088] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:03.176 [2024-11-20 13:55:00.426140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:03.176 [2024-11-20 13:55:00.438084] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:03.176 [2024-11-20 13:55:00.438142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:03.176 [2024-11-20 13:55:00.450065] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:03.176 [2024-11-20 13:55:00.450124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:03.176 [2024-11-20 13:55:00.462048] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:03.176 [2024-11-20 13:55:00.462107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:03.176 [2024-11-20 13:55:00.474016] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:03.176 [2024-11-20 13:55:00.474072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:03.176 [2024-11-20 13:55:00.485997] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:03.176 [2024-11-20 13:55:00.486058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:03.437 [2024-11-20 13:55:00.497990] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:03.437 [2024-11-20 13:55:00.498056] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:03.437 [2024-11-20 13:55:00.509970] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:03.437 [2024-11-20 13:55:00.509997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:03.437 [2024-11-20 13:55:00.521946] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:03.437 [2024-11-20 13:55:00.521972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:03.437 [2024-11-20 13:55:00.533916] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:03.437 [2024-11-20 13:55:00.533939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:03.437 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (65707) - No such process 00:34:03.437 13:55:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 65707 00:34:03.437 13:55:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:03.437 13:55:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.437 13:55:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:03.437 13:55:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.437 13:55:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:34:03.437 13:55:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.437 13:55:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:03.437 delay0 00:34:03.437 13:55:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.437 13:55:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:34:03.437 13:55:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.437 13:55:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:03.437 13:55:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.437 13:55:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:34:03.697 [2024-11-20 13:55:00.770775] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:34:10.266 Initializing NVMe Controllers 00:34:10.266 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:34:10.266 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:10.266 Initialization complete. Launching workers. 
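The zcopy.sh steps traced just above swap NSID 1 onto a delay bdev so that outstanding commands stay in flight long enough to be aborted, then launch the abort example against the target over TCP (the summary table above is also internally consistent with this workload: 14686.88 IOPS x 8192 B is roughly 114.74 MiB/s). A condensed sketch of the same sequence, with the harness's rpc_cmd wrapper replaced by scripts/rpc.py; the subsystem, bdev names and address are the ones this run uses, so a different setup would need its own values:

  # move NSID 1 from the plain malloc bdev to a delay bdev that adds artificial latency
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # drive queued randrw I/O at the slow namespace for 5 s and abort commands in flight
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'

The latency arguments to bdev_delay_create are in microseconds, so these values hold reads and writes for roughly a second; that is what leaves the abort tool enough queued commands to cancel, which the completed/aborted counters printed next reflect.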
00:34:10.266 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 117 00:34:10.266 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 404, failed to submit 33 00:34:10.266 success 283, unsuccessful 121, failed 0 00:34:10.266 13:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:34:10.266 13:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:34:10.266 13:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:10.266 13:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:34:10.266 13:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:10.266 13:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:34:10.266 13:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:10.266 13:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:10.266 rmmod nvme_tcp 00:34:10.266 rmmod nvme_fabrics 00:34:10.266 rmmod nvme_keyring 00:34:10.266 13:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:10.266 13:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:34:10.266 13:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:34:10.266 13:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 65551 ']' 00:34:10.266 13:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 65551 00:34:10.266 13:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 65551 ']' 00:34:10.267 13:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 65551 00:34:10.267 13:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:34:10.267 13:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:10.267 13:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65551 00:34:10.267 13:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:10.267 killing process with pid 65551 00:34:10.267 13:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:10.267 13:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65551' 00:34:10.267 13:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 65551 00:34:10.267 13:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 65551 00:34:10.267 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:10.267 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:10.267 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:10.267 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:34:10.267 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:34:10.267 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:10.267 13:55:07 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:34:10.267 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:10.267 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:34:10.267 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:34:10.267 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:34:10.267 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:34:10.267 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:34:10.267 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:34:10.267 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:34:10.267 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:34:10.267 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:34:10.267 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:34:10.267 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:34:10.267 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:34:10.267 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:10.267 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:10.267 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:34:10.267 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:10.267 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:10.267 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:10.267 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:34:10.267 00:34:10.267 real 0m25.013s 00:34:10.267 user 0m40.418s 00:34:10.267 sys 0m7.067s 00:34:10.267 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:10.267 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:10.267 ************************************ 00:34:10.267 END TEST nvmf_zcopy 00:34:10.267 ************************************ 00:34:10.267 13:55:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:34:10.267 13:55:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:10.267 13:55:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:10.267 13:55:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:34:10.267 ************************************ 00:34:10.267 START TEST nvmf_nmic 00:34:10.267 ************************************ 00:34:10.267 13:55:07 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:34:10.528 * Looking for test storage... 00:34:10.528 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:10.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:10.528 --rc genhtml_branch_coverage=1 00:34:10.528 --rc genhtml_function_coverage=1 00:34:10.528 --rc genhtml_legend=1 00:34:10.528 --rc geninfo_all_blocks=1 00:34:10.528 --rc geninfo_unexecuted_blocks=1 00:34:10.528 00:34:10.528 ' 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:10.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:10.528 --rc genhtml_branch_coverage=1 00:34:10.528 --rc genhtml_function_coverage=1 00:34:10.528 --rc genhtml_legend=1 00:34:10.528 --rc geninfo_all_blocks=1 00:34:10.528 --rc geninfo_unexecuted_blocks=1 00:34:10.528 00:34:10.528 ' 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:10.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:10.528 --rc genhtml_branch_coverage=1 00:34:10.528 --rc genhtml_function_coverage=1 00:34:10.528 --rc genhtml_legend=1 00:34:10.528 --rc geninfo_all_blocks=1 00:34:10.528 --rc geninfo_unexecuted_blocks=1 00:34:10.528 00:34:10.528 ' 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:10.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:10.528 --rc genhtml_branch_coverage=1 00:34:10.528 --rc genhtml_function_coverage=1 00:34:10.528 --rc genhtml_legend=1 00:34:10.528 --rc geninfo_all_blocks=1 00:34:10.528 --rc geninfo_unexecuted_blocks=1 00:34:10.528 00:34:10.528 ' 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:10.528 13:55:07 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=105ec898-1662-46bd-85be-b241e399edb9 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:10.528 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:10.528 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:10.529 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:34:10.529 13:55:07 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:10.529 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:10.529 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:10.529 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:10.529 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:10.529 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:10.529 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:10.529 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:10.529 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:34:10.529 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:34:10.529 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:34:10.529 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:34:10.529 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:34:10.529 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:34:10.529 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:10.529 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:34:10.529 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:34:10.529 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:34:10.529 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:10.529 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:34:10.529 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:10.529 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:34:10.529 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:10.529 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:34:10.529 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:10.529 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:10.529 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:10.529 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:10.529 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:10.529 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:10.529 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:34:10.789 Cannot 
find device "nvmf_init_br" 00:34:10.789 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:34:10.789 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:34:10.789 Cannot find device "nvmf_init_br2" 00:34:10.789 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:34:10.789 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:34:10.789 Cannot find device "nvmf_tgt_br" 00:34:10.789 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:34:10.789 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:34:10.789 Cannot find device "nvmf_tgt_br2" 00:34:10.789 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:34:10.789 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:34:10.789 Cannot find device "nvmf_init_br" 00:34:10.789 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:34:10.789 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:34:10.789 Cannot find device "nvmf_init_br2" 00:34:10.789 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:34:10.789 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:34:10.789 Cannot find device "nvmf_tgt_br" 00:34:10.789 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:34:10.789 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:34:10.789 Cannot find device "nvmf_tgt_br2" 00:34:10.789 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:34:10.789 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:34:10.789 Cannot find device "nvmf_br" 00:34:10.789 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:34:10.789 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:34:10.789 Cannot find device "nvmf_init_if" 00:34:10.789 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:34:10.789 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:34:10.789 Cannot find device "nvmf_init_if2" 00:34:10.789 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:34:10.789 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:10.789 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:10.789 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:34:10.789 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:10.789 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:10.789 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:34:10.789 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:34:10.789 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:34:10.789 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:34:10.789 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:34:10.789 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:34:10.789 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:10.789 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:34:10.789 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:34:10.789 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:34:10.789 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:34:10.789 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:34:10.789 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:34:11.050 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:34:11.050 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:34:11.050 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:34:11.050 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:34:11.050 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:34:11.050 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:11.050 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:11.050 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:11.050 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:34:11.050 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:34:11.051 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:34:11.051 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:34:11.051 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:11.051 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:11.051 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:11.051 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:34:11.051 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:34:11.051 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:34:11.051 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:11.051 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:34:11.051 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:34:11.051 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:34:11.051 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.133 ms 00:34:11.051 00:34:11.051 --- 10.0.0.3 ping statistics --- 00:34:11.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:11.051 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:34:11.051 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:34:11.051 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:34:11.051 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:34:11.051 00:34:11.051 --- 10.0.0.4 ping statistics --- 00:34:11.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:11.051 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:34:11.051 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:34:11.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:11.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:34:11.051 00:34:11.051 --- 10.0.0.1 ping statistics --- 00:34:11.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:11.051 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:34:11.051 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:34:11.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:11.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:34:11.051 00:34:11.051 --- 10.0.0.2 ping statistics --- 00:34:11.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:11.051 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:34:11.051 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:11.051 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:34:11.051 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:11.051 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:11.051 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:11.051 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:11.051 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:11.051 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:11.051 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:11.051 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:34:11.051 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:11.051 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:11.051 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:11.051 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=66089 00:34:11.051 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:34:11.051 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 66089 00:34:11.051 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 66089 ']' 00:34:11.051 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:11.051 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:11.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:11.051 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:11.051 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:11.051 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:11.051 [2024-11-20 13:55:08.337496] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:34:11.051 [2024-11-20 13:55:08.337566] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:11.340 [2024-11-20 13:55:08.490782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:11.340 [2024-11-20 13:55:08.552531] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:11.340 [2024-11-20 13:55:08.552583] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:11.340 [2024-11-20 13:55:08.552607] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:11.340 [2024-11-20 13:55:08.552613] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:11.340 [2024-11-20 13:55:08.552617] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:11.340 [2024-11-20 13:55:08.553618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:11.340 [2024-11-20 13:55:08.553798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:11.340 [2024-11-20 13:55:08.553910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:11.340 [2024-11-20 13:55:08.553914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:11.340 [2024-11-20 13:55:08.598617] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:34:11.907 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:11.907 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:34:11.907 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:11.907 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:11.907 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:12.164 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:12.164 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:12.164 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.164 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:12.164 [2024-11-20 13:55:09.278995] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:12.164 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.164 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:12.164 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.164 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:12.164 Malloc0 00:34:12.164 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.164 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:12.164 13:55:09 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.164 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:12.164 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.164 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:12.164 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.164 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:12.164 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.164 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:34:12.164 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.164 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:12.164 [2024-11-20 13:55:09.340337] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:34:12.164 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.164 test case1: single bdev can't be used in multiple subsystems 00:34:12.164 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:34:12.164 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:34:12.164 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.164 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:12.164 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.164 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:34:12.164 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.164 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:12.164 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.164 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:34:12.164 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:34:12.164 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.164 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:12.164 [2024-11-20 13:55:09.368228] bdev.c:8507:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:34:12.164 [2024-11-20 13:55:09.368263] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:34:12.164 [2024-11-20 13:55:09.368271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:12.164 request: 00:34:12.164 { 00:34:12.164 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:34:12.164 "namespace": { 00:34:12.164 "bdev_name": "Malloc0", 00:34:12.164 "no_auto_visible": false, 00:34:12.164 "hide_metadata": false 00:34:12.164 }, 00:34:12.164 "method": "nvmf_subsystem_add_ns", 00:34:12.164 "req_id": 1 00:34:12.164 } 00:34:12.164 Got JSON-RPC error response 00:34:12.164 response: 00:34:12.164 { 00:34:12.164 "code": -32602, 00:34:12.164 "message": "Invalid parameters" 00:34:12.164 } 00:34:12.164 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:12.164 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:34:12.164 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:34:12.164 Adding namespace failed - expected result. 00:34:12.164 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:34:12.164 test case2: host connect to nvmf target in multiple paths 00:34:12.164 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:34:12.164 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:34:12.164 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.164 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:12.164 [2024-11-20 13:55:09.380297] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:34:12.164 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.164 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid=105ec898-1662-46bd-85be-b241e399edb9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:34:12.423 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid=105ec898-1662-46bd-85be-b241e399edb9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:34:12.423 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:34:12.423 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:34:12.423 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:12.423 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:34:12.423 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:34:14.953 13:55:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:14.953 13:55:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:34:14.953 13:55:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:14.953 13:55:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:34:14.953 13:55:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 
00:34:14.953 13:55:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:34:14.953 13:55:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:14.953 [global] 00:34:14.953 thread=1 00:34:14.953 invalidate=1 00:34:14.953 rw=write 00:34:14.953 time_based=1 00:34:14.953 runtime=1 00:34:14.953 ioengine=libaio 00:34:14.953 direct=1 00:34:14.953 bs=4096 00:34:14.953 iodepth=1 00:34:14.953 norandommap=0 00:34:14.953 numjobs=1 00:34:14.953 00:34:14.953 verify_dump=1 00:34:14.953 verify_backlog=512 00:34:14.953 verify_state_save=0 00:34:14.953 do_verify=1 00:34:14.953 verify=crc32c-intel 00:34:14.953 [job0] 00:34:14.953 filename=/dev/nvme0n1 00:34:14.953 Could not set queue depth (nvme0n1) 00:34:14.953 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:14.953 fio-3.35 00:34:14.953 Starting 1 thread 00:34:15.891 00:34:15.891 job0: (groupid=0, jobs=1): err= 0: pid=66175: Wed Nov 20 13:55:12 2024 00:34:15.891 read: IOPS=2377, BW=9508KiB/s (9736kB/s)(9508KiB/1000msec) 00:34:15.891 slat (nsec): min=8929, max=37886, avg=11396.97, stdev=2085.02 00:34:15.891 clat (usec): min=141, max=454, avg=232.73, stdev=36.09 00:34:15.891 lat (usec): min=152, max=465, avg=244.12, stdev=36.08 00:34:15.891 clat percentiles (usec): 00:34:15.891 | 1.00th=[ 151], 5.00th=[ 169], 10.00th=[ 190], 20.00th=[ 206], 00:34:15.891 | 30.00th=[ 217], 40.00th=[ 225], 50.00th=[ 233], 60.00th=[ 241], 00:34:15.891 | 70.00th=[ 249], 80.00th=[ 260], 90.00th=[ 273], 95.00th=[ 289], 00:34:15.891 | 99.00th=[ 343], 99.50th=[ 351], 99.90th=[ 383], 99.95th=[ 388], 00:34:15.891 | 99.99th=[ 457] 00:34:15.891 write: IOPS=2560, BW=10.0MiB/s (10.5MB/s)(10.0MiB/1000msec); 0 zone resets 00:34:15.891 slat (usec): min=12, max=141, avg=17.24, stdev= 7.11 00:34:15.891 clat (usec): min=83, max=311, avg=144.46, stdev=22.95 00:34:15.891 lat (usec): min=99, max=418, avg=161.70, stdev=23.69 00:34:15.891 clat percentiles (usec): 00:34:15.891 | 1.00th=[ 94], 5.00th=[ 103], 10.00th=[ 113], 20.00th=[ 127], 00:34:15.891 | 30.00th=[ 135], 40.00th=[ 143], 50.00th=[ 147], 60.00th=[ 151], 00:34:15.891 | 70.00th=[ 157], 80.00th=[ 161], 90.00th=[ 172], 95.00th=[ 178], 00:34:15.891 | 99.00th=[ 200], 99.50th=[ 210], 99.90th=[ 277], 99.95th=[ 281], 00:34:15.891 | 99.99th=[ 310] 00:34:15.891 bw ( KiB/s): min=12080, max=12080, per=100.00%, avg=12080.00, stdev= 0.00, samples=1 00:34:15.891 iops : min= 3020, max= 3020, avg=3020.00, stdev= 0.00, samples=1 00:34:15.891 lat (usec) : 100=1.86%, 250=84.36%, 500=13.77% 00:34:15.891 cpu : usr=0.90%, sys=5.30%, ctx=4937, majf=0, minf=5 00:34:15.891 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:15.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:15.891 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:15.891 issued rwts: total=2377,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:15.891 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:15.891 00:34:15.891 Run status group 0 (all jobs): 00:34:15.891 READ: bw=9508KiB/s (9736kB/s), 9508KiB/s-9508KiB/s (9736kB/s-9736kB/s), io=9508KiB (9736kB), run=1000-1000msec 00:34:15.891 WRITE: bw=10.0MiB/s (10.5MB/s), 10.0MiB/s-10.0MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1000-1000msec 00:34:15.891 00:34:15.891 Disk stats (read/write): 00:34:15.891 nvme0n1: ios=2098/2450, merge=0/0, 
ticks=501/372, in_queue=873, util=91.68% 00:34:15.891 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:15.891 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:34:15.891 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:15.891 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:34:15.891 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:34:15.891 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:15.891 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:15.891 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:15.891 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:34:15.891 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:34:15.891 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:34:15.891 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:15.891 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:34:15.891 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:15.891 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:34:15.891 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:15.891 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:15.891 rmmod nvme_tcp 00:34:15.891 rmmod nvme_fabrics 00:34:15.891 rmmod nvme_keyring 00:34:15.891 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:15.891 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:34:15.891 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:34:15.891 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 66089 ']' 00:34:15.891 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 66089 00:34:15.891 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 66089 ']' 00:34:15.891 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 66089 00:34:15.891 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:34:15.891 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:15.891 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66089 00:34:16.151 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:16.151 killing process with pid 66089 00:34:16.151 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:16.151 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66089' 00:34:16.151 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@973 -- # kill 66089 00:34:16.151 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 66089 00:34:16.410 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:16.410 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:16.410 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:16.410 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:34:16.410 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:34:16.410 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:16.410 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:34:16.410 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:16.410 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:34:16.410 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:34:16.410 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:34:16.410 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:34:16.410 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:34:16.410 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:34:16.410 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:34:16.410 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:34:16.410 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:34:16.410 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:34:16.410 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:34:16.410 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:34:16.410 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:16.410 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:16.410 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:34:16.410 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:16.410 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:16.410 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:16.669 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:34:16.669 00:34:16.669 real 0m6.196s 00:34:16.669 user 0m19.259s 00:34:16.669 sys 0m1.797s 00:34:16.669 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:16.669 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:16.669 
************************************ 00:34:16.669 END TEST nvmf_nmic 00:34:16.669 ************************************ 00:34:16.669 13:55:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:34:16.669 13:55:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:16.669 13:55:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:16.669 13:55:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:34:16.669 ************************************ 00:34:16.669 START TEST nvmf_fio_target 00:34:16.669 ************************************ 00:34:16.669 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:34:16.669 * Looking for test storage... 00:34:16.669 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:34:16.669 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:16.669 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:16.669 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:34:16.930 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:16.930 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:16.930 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:16.930 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:16.930 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:16.930 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:16.930 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:16.930 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:16.930 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:16.930 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:16.930 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:16.930 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:16.930 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:34:16.930 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:34:16.930 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:16.930 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:16.930 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:34:16.930 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:34:16.930 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:16.930 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:34:16.930 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:16.930 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:34:16.930 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:34:16.930 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:16.930 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:34:16.930 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:16.930 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:16.930 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:16.930 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:34:16.930 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:16.930 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:16.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:16.930 --rc genhtml_branch_coverage=1 00:34:16.930 --rc genhtml_function_coverage=1 00:34:16.930 --rc genhtml_legend=1 00:34:16.930 --rc geninfo_all_blocks=1 00:34:16.930 --rc geninfo_unexecuted_blocks=1 00:34:16.930 00:34:16.930 ' 00:34:16.930 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:16.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:16.930 --rc genhtml_branch_coverage=1 00:34:16.930 --rc genhtml_function_coverage=1 00:34:16.930 --rc genhtml_legend=1 00:34:16.930 --rc geninfo_all_blocks=1 00:34:16.930 --rc geninfo_unexecuted_blocks=1 00:34:16.930 00:34:16.930 ' 00:34:16.930 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:16.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:16.930 --rc genhtml_branch_coverage=1 00:34:16.930 --rc genhtml_function_coverage=1 00:34:16.930 --rc genhtml_legend=1 00:34:16.930 --rc geninfo_all_blocks=1 00:34:16.930 --rc geninfo_unexecuted_blocks=1 00:34:16.930 00:34:16.930 ' 00:34:16.930 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:16.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:16.930 --rc genhtml_branch_coverage=1 00:34:16.930 --rc genhtml_function_coverage=1 00:34:16.930 --rc genhtml_legend=1 00:34:16.930 --rc geninfo_all_blocks=1 00:34:16.930 --rc geninfo_unexecuted_blocks=1 00:34:16.930 00:34:16.930 ' 00:34:16.930 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:16.930 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:34:16.930 
13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:16.930 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:16.930 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:16.930 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:16.930 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:16.930 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:16.930 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:16.930 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:16.930 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:16.930 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:16.930 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:34:16.930 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=105ec898-1662-46bd-85be-b241e399edb9 00:34:16.930 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:16.931 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:16.931 13:55:14 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:34:16.931 Cannot find device "nvmf_init_br" 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:34:16.931 Cannot find device "nvmf_init_br2" 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:34:16.931 Cannot find device "nvmf_tgt_br" 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:34:16.931 Cannot find device "nvmf_tgt_br2" 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:34:16.931 Cannot find device "nvmf_init_br" 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:34:16.931 Cannot find device "nvmf_init_br2" 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:34:16.931 Cannot find device "nvmf_tgt_br" 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:34:16.931 Cannot find device "nvmf_tgt_br2" 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:34:16.931 Cannot find device "nvmf_br" 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:34:16.931 Cannot find device "nvmf_init_if" 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:34:16.931 Cannot find device "nvmf_init_if2" 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:16.931 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:34:16.931 
13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:16.931 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:34:16.931 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:34:17.192 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:34:17.192 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.104 ms 00:34:17.192 00:34:17.192 --- 10.0.0.3 ping statistics --- 00:34:17.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:17.192 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:34:17.192 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:34:17.192 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:34:17.192 00:34:17.192 --- 10.0.0.4 ping statistics --- 00:34:17.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:17.192 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:34:17.192 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:17.192 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:34:17.192 00:34:17.192 --- 10.0.0.1 ping statistics --- 00:34:17.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:17.192 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:34:17.192 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:17.192 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:34:17.192 00:34:17.192 --- 10.0.0.2 ping statistics --- 00:34:17.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:17.192 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=66414 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 66414 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 66414 ']' 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:17.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:17.192 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:17.193 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:17.193 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:34:17.452 [2024-11-20 13:55:14.569497] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
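Everything nvmf/common.sh has done above is plumbing for a self-contained test network: veth pairs for the initiator side (nvmf_init_if/nvmf_init_br and nvmf_init_if2/nvmf_init_br2) and for the target side, the target ends moved into the nvmf_tgt_ns_spdk namespace, the root-namespace peers joined by the nvmf_br bridge, TCP port 4420 opened in iptables, and connectivity checked with single pings before nvmf_tgt is started inside the namespace. A condensed sketch of that setup for one pair per side — interface names, addresses and flags taken from the trace; the second pair, stale-device cleanup and error handling left out — would be:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays in the root namespace
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end will move into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                     # bridge the two root-namespace peer ends
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3                                          # root namespace -> target namespace
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

With the target process confined to the namespace it only ever sees the 10.0.0.3/10.0.0.4 side of the bridge, while the fio host reaches it from 10.0.0.1/10.0.0.2, which is exactly what the four pings above verify.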
00:34:17.452 [2024-11-20 13:55:14.569581] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:17.452 [2024-11-20 13:55:14.709099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:17.711 [2024-11-20 13:55:14.777767] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:17.711 [2024-11-20 13:55:14.777817] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:17.711 [2024-11-20 13:55:14.777841] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:17.711 [2024-11-20 13:55:14.777847] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:17.711 [2024-11-20 13:55:14.777852] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:17.711 [2024-11-20 13:55:14.778756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:17.711 [2024-11-20 13:55:14.778832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:17.711 [2024-11-20 13:55:14.778953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:17.711 [2024-11-20 13:55:14.778955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:17.711 [2024-11-20 13:55:14.832101] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:34:18.278 13:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:18.278 13:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:34:18.278 13:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:18.278 13:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:18.278 13:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:18.278 13:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:18.278 13:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:18.536 [2024-11-20 13:55:15.781794] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:18.536 13:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:18.794 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:34:18.794 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:19.053 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:34:19.053 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:19.312 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:34:19.312 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:19.571 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:34:19.571 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:34:19.830 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:20.089 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:34:20.089 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:20.347 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:34:20.347 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:20.914 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:34:20.915 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:34:20.915 13:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:21.173 13:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:21.173 13:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:21.431 13:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:21.431 13:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:34:21.689 13:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:34:21.948 [2024-11-20 13:55:19.104731] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:34:21.948 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:34:22.205 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:34:22.463 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid=105ec898-1662-46bd-85be-b241e399edb9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:34:22.463 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:34:22.463 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:34:22.463 13:55:19 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:22.463 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:34:22.463 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:34:22.463 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:34:24.996 13:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:24.997 13:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:34:24.997 13:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:24.997 13:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:34:24.997 13:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:34:24.997 13:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:34:24.997 13:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:24.997 [global] 00:34:24.997 thread=1 00:34:24.997 invalidate=1 00:34:24.997 rw=write 00:34:24.997 time_based=1 00:34:24.997 runtime=1 00:34:24.997 ioengine=libaio 00:34:24.997 direct=1 00:34:24.997 bs=4096 00:34:24.997 iodepth=1 00:34:24.997 norandommap=0 00:34:24.997 numjobs=1 00:34:24.997 00:34:24.997 verify_dump=1 00:34:24.997 verify_backlog=512 00:34:24.997 verify_state_save=0 00:34:24.997 do_verify=1 00:34:24.997 verify=crc32c-intel 00:34:24.997 [job0] 00:34:24.997 filename=/dev/nvme0n1 00:34:24.997 [job1] 00:34:24.997 filename=/dev/nvme0n2 00:34:24.997 [job2] 00:34:24.997 filename=/dev/nvme0n3 00:34:24.997 [job3] 00:34:24.997 filename=/dev/nvme0n4 00:34:24.997 Could not set queue depth (nvme0n1) 00:34:24.997 Could not set queue depth (nvme0n2) 00:34:24.997 Could not set queue depth (nvme0n3) 00:34:24.997 Could not set queue depth (nvme0n4) 00:34:24.997 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:24.997 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:24.997 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:24.997 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:24.997 fio-3.35 00:34:24.997 Starting 4 threads 00:34:25.935 00:34:25.935 job0: (groupid=0, jobs=1): err= 0: pid=66597: Wed Nov 20 13:55:23 2024 00:34:25.935 read: IOPS=990, BW=3960KiB/s (4055kB/s)(3964KiB/1001msec) 00:34:25.935 slat (nsec): min=16422, max=74477, avg=30187.90, stdev=5549.47 00:34:25.935 clat (usec): min=203, max=1056, avg=539.13, stdev=164.38 00:34:25.935 lat (usec): min=224, max=1085, avg=569.32, stdev=165.21 00:34:25.935 clat percentiles (usec): 00:34:25.935 | 1.00th=[ 245], 5.00th=[ 355], 10.00th=[ 388], 20.00th=[ 416], 00:34:25.935 | 30.00th=[ 437], 40.00th=[ 457], 50.00th=[ 478], 60.00th=[ 510], 00:34:25.935 | 70.00th=[ 594], 80.00th=[ 701], 90.00th=[ 807], 95.00th=[ 857], 00:34:25.935 | 99.00th=[ 938], 99.50th=[ 996], 99.90th=[ 1057], 99.95th=[ 1057], 00:34:25.935 | 99.99th=[ 
1057] 00:34:25.935 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:34:25.935 slat (usec): min=37, max=138, avg=48.63, stdev= 7.90 00:34:25.935 clat (usec): min=147, max=759, avg=368.83, stdev=101.18 00:34:25.935 lat (usec): min=188, max=805, avg=417.46, stdev=101.21 00:34:25.935 clat percentiles (usec): 00:34:25.935 | 1.00th=[ 180], 5.00th=[ 241], 10.00th=[ 281], 20.00th=[ 306], 00:34:25.935 | 30.00th=[ 318], 40.00th=[ 330], 50.00th=[ 343], 60.00th=[ 359], 00:34:25.935 | 70.00th=[ 379], 80.00th=[ 420], 90.00th=[ 545], 95.00th=[ 586], 00:34:25.935 | 99.00th=[ 652], 99.50th=[ 709], 99.90th=[ 734], 99.95th=[ 758], 00:34:25.935 | 99.99th=[ 758] 00:34:25.935 bw ( KiB/s): min= 4496, max= 4496, per=18.31%, avg=4496.00, stdev= 0.00, samples=1 00:34:25.935 iops : min= 1124, max= 1124, avg=1124.00, stdev= 0.00, samples=1 00:34:25.935 lat (usec) : 250=3.33%, 500=68.73%, 750=19.75%, 1000=8.04% 00:34:25.935 lat (msec) : 2=0.15% 00:34:25.935 cpu : usr=1.30%, sys=6.80%, ctx=2015, majf=0, minf=13 00:34:25.935 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:25.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:25.935 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:25.935 issued rwts: total=991,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:25.935 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:25.935 job1: (groupid=0, jobs=1): err= 0: pid=66598: Wed Nov 20 13:55:23 2024 00:34:25.935 read: IOPS=1896, BW=7584KiB/s (7766kB/s)(7592KiB/1001msec) 00:34:25.935 slat (usec): min=6, max=104, avg=14.18, stdev= 5.95 00:34:25.935 clat (usec): min=180, max=463, avg=266.67, stdev=36.57 00:34:25.935 lat (usec): min=191, max=496, avg=280.85, stdev=38.04 00:34:25.935 clat percentiles (usec): 00:34:25.935 | 1.00th=[ 200], 5.00th=[ 217], 10.00th=[ 225], 20.00th=[ 237], 00:34:25.935 | 30.00th=[ 245], 40.00th=[ 253], 50.00th=[ 265], 60.00th=[ 273], 00:34:25.935 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 314], 95.00th=[ 334], 00:34:25.935 | 99.00th=[ 375], 99.50th=[ 400], 99.90th=[ 461], 99.95th=[ 465], 00:34:25.935 | 99.99th=[ 465] 00:34:25.935 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:34:25.935 slat (nsec): min=12107, max=72600, avg=22528.35, stdev=8321.78 00:34:25.935 clat (usec): min=113, max=2418, avg=202.48, stdev=59.51 00:34:25.935 lat (usec): min=129, max=2444, avg=225.01, stdev=61.36 00:34:25.935 clat percentiles (usec): 00:34:25.935 | 1.00th=[ 139], 5.00th=[ 155], 10.00th=[ 163], 20.00th=[ 174], 00:34:25.935 | 30.00th=[ 182], 40.00th=[ 190], 50.00th=[ 198], 60.00th=[ 206], 00:34:25.935 | 70.00th=[ 215], 80.00th=[ 227], 90.00th=[ 245], 95.00th=[ 265], 00:34:25.935 | 99.00th=[ 306], 99.50th=[ 322], 99.90th=[ 343], 99.95th=[ 355], 00:34:25.935 | 99.99th=[ 2409] 00:34:25.935 bw ( KiB/s): min= 8400, max= 8400, per=34.21%, avg=8400.00, stdev= 0.00, samples=1 00:34:25.935 iops : min= 2100, max= 2100, avg=2100.00, stdev= 0.00, samples=1 00:34:25.935 lat (usec) : 250=64.47%, 500=35.50% 00:34:25.935 lat (msec) : 4=0.03% 00:34:25.935 cpu : usr=1.00%, sys=5.70%, ctx=3946, majf=0, minf=17 00:34:25.935 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:25.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:25.935 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:25.935 issued rwts: total=1898,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:25.935 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:34:25.935 job2: (groupid=0, jobs=1): err= 0: pid=66599: Wed Nov 20 13:55:23 2024 00:34:25.935 read: IOPS=1069, BW=4280KiB/s (4382kB/s)(4284KiB/1001msec) 00:34:25.935 slat (usec): min=9, max=267, avg=20.56, stdev=12.51 00:34:25.935 clat (usec): min=238, max=5297, avg=445.19, stdev=231.36 00:34:25.935 lat (usec): min=256, max=5310, avg=465.75, stdev=231.64 00:34:25.935 clat percentiles (usec): 00:34:25.935 | 1.00th=[ 273], 5.00th=[ 314], 10.00th=[ 347], 20.00th=[ 375], 00:34:25.935 | 30.00th=[ 396], 40.00th=[ 412], 50.00th=[ 433], 60.00th=[ 445], 00:34:25.935 | 70.00th=[ 461], 80.00th=[ 482], 90.00th=[ 515], 95.00th=[ 562], 00:34:25.935 | 99.00th=[ 693], 99.50th=[ 758], 99.90th=[ 3949], 99.95th=[ 5276], 00:34:25.935 | 99.99th=[ 5276] 00:34:25.935 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:34:25.935 slat (usec): min=15, max=115, avg=28.93, stdev=14.52 00:34:25.935 clat (usec): min=149, max=2401, avg=293.49, stdev=85.36 00:34:25.935 lat (usec): min=169, max=2435, avg=322.42, stdev=90.66 00:34:25.935 clat percentiles (usec): 00:34:25.935 | 1.00th=[ 172], 5.00th=[ 192], 10.00th=[ 208], 20.00th=[ 225], 00:34:25.935 | 30.00th=[ 243], 40.00th=[ 273], 50.00th=[ 297], 60.00th=[ 318], 00:34:25.935 | 70.00th=[ 334], 80.00th=[ 355], 90.00th=[ 371], 95.00th=[ 392], 00:34:25.935 | 99.00th=[ 441], 99.50th=[ 478], 99.90th=[ 644], 99.95th=[ 2409], 00:34:25.935 | 99.99th=[ 2409] 00:34:25.935 bw ( KiB/s): min= 6120, max= 6120, per=24.93%, avg=6120.00, stdev= 0.00, samples=1 00:34:25.935 iops : min= 1530, max= 1530, avg=1530.00, stdev= 0.00, samples=1 00:34:25.935 lat (usec) : 250=19.37%, 500=74.65%, 750=5.64%, 1000=0.12% 00:34:25.935 lat (msec) : 2=0.04%, 4=0.15%, 10=0.04% 00:34:25.935 cpu : usr=0.80%, sys=5.50%, ctx=2614, majf=0, minf=7 00:34:25.935 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:25.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:25.936 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:25.936 issued rwts: total=1071,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:25.936 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:25.936 job3: (groupid=0, jobs=1): err= 0: pid=66600: Wed Nov 20 13:55:23 2024 00:34:25.936 read: IOPS=1488, BW=5954KiB/s (6097kB/s)(5960KiB/1001msec) 00:34:25.936 slat (nsec): min=19197, max=85808, avg=31278.42, stdev=6446.72 00:34:25.936 clat (usec): min=237, max=1127, avg=321.38, stdev=44.37 00:34:25.936 lat (usec): min=267, max=1160, avg=352.66, stdev=45.23 00:34:25.936 clat percentiles (usec): 00:34:25.936 | 1.00th=[ 253], 5.00th=[ 269], 10.00th=[ 281], 20.00th=[ 289], 00:34:25.936 | 30.00th=[ 302], 40.00th=[ 310], 50.00th=[ 318], 60.00th=[ 326], 00:34:25.936 | 70.00th=[ 338], 80.00th=[ 351], 90.00th=[ 367], 95.00th=[ 379], 00:34:25.936 | 99.00th=[ 412], 99.50th=[ 433], 99.90th=[ 988], 99.95th=[ 1123], 00:34:25.936 | 99.99th=[ 1123] 00:34:25.936 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:34:25.936 slat (usec): min=23, max=179, avg=45.21, stdev=10.11 00:34:25.936 clat (usec): min=143, max=368, avg=255.69, stdev=33.86 00:34:25.936 lat (usec): min=179, max=471, avg=300.91, stdev=37.35 00:34:25.936 clat percentiles (usec): 00:34:25.936 | 1.00th=[ 186], 5.00th=[ 206], 10.00th=[ 215], 20.00th=[ 229], 00:34:25.936 | 30.00th=[ 237], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 265], 00:34:25.936 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 302], 95.00th=[ 318], 00:34:25.936 | 
99.00th=[ 343], 99.50th=[ 355], 99.90th=[ 367], 99.95th=[ 367], 00:34:25.936 | 99.99th=[ 367] 00:34:25.936 bw ( KiB/s): min= 8104, max= 8104, per=33.01%, avg=8104.00, stdev= 0.00, samples=1 00:34:25.936 iops : min= 2026, max= 2026, avg=2026.00, stdev= 0.00, samples=1 00:34:25.936 lat (usec) : 250=23.73%, 500=76.14%, 750=0.07%, 1000=0.03% 00:34:25.936 lat (msec) : 2=0.03% 00:34:25.936 cpu : usr=2.30%, sys=9.60%, ctx=3029, majf=0, minf=11 00:34:25.936 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:25.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:25.936 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:25.936 issued rwts: total=1490,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:25.936 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:25.936 00:34:25.936 Run status group 0 (all jobs): 00:34:25.936 READ: bw=21.3MiB/s (22.3MB/s), 3960KiB/s-7584KiB/s (4055kB/s-7766kB/s), io=21.3MiB (22.3MB), run=1001-1001msec 00:34:25.936 WRITE: bw=24.0MiB/s (25.1MB/s), 4092KiB/s-8184KiB/s (4190kB/s-8380kB/s), io=24.0MiB (25.2MB), run=1001-1001msec 00:34:25.936 00:34:25.936 Disk stats (read/write): 00:34:25.936 nvme0n1: ios=877/1024, merge=0/0, ticks=451/402, in_queue=853, util=90.38% 00:34:25.936 nvme0n2: ios=1585/1978, merge=0/0, ticks=457/419, in_queue=876, util=91.33% 00:34:25.936 nvme0n3: ios=1063/1176, merge=0/0, ticks=514/369, in_queue=883, util=90.62% 00:34:25.936 nvme0n4: ios=1187/1536, merge=0/0, ticks=455/412, in_queue=867, util=91.91% 00:34:25.936 13:55:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:34:25.936 [global] 00:34:25.936 thread=1 00:34:25.936 invalidate=1 00:34:25.936 rw=randwrite 00:34:25.936 time_based=1 00:34:25.936 runtime=1 00:34:25.936 ioengine=libaio 00:34:25.936 direct=1 00:34:25.936 bs=4096 00:34:25.936 iodepth=1 00:34:25.936 norandommap=0 00:34:25.936 numjobs=1 00:34:25.936 00:34:25.936 verify_dump=1 00:34:25.936 verify_backlog=512 00:34:25.936 verify_state_save=0 00:34:25.936 do_verify=1 00:34:25.936 verify=crc32c-intel 00:34:25.936 [job0] 00:34:25.936 filename=/dev/nvme0n1 00:34:25.936 [job1] 00:34:25.936 filename=/dev/nvme0n2 00:34:25.936 [job2] 00:34:25.936 filename=/dev/nvme0n3 00:34:25.936 [job3] 00:34:25.936 filename=/dev/nvme0n4 00:34:25.936 Could not set queue depth (nvme0n1) 00:34:25.936 Could not set queue depth (nvme0n2) 00:34:25.936 Could not set queue depth (nvme0n3) 00:34:25.936 Could not set queue depth (nvme0n4) 00:34:26.195 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:26.195 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:26.195 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:26.195 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:26.195 fio-3.35 00:34:26.195 Starting 4 threads 00:34:27.573 00:34:27.573 job0: (groupid=0, jobs=1): err= 0: pid=66653: Wed Nov 20 13:55:24 2024 00:34:27.573 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:34:27.573 slat (nsec): min=8886, max=69770, avg=22490.06, stdev=8122.95 00:34:27.573 clat (usec): min=157, max=482, avg=313.22, stdev=60.87 00:34:27.573 lat (usec): min=171, max=523, avg=335.72, stdev=63.64 00:34:27.573 clat percentiles 
(usec): 00:34:27.573 | 1.00th=[ 180], 5.00th=[ 219], 10.00th=[ 237], 20.00th=[ 260], 00:34:27.573 | 30.00th=[ 281], 40.00th=[ 297], 50.00th=[ 310], 60.00th=[ 326], 00:34:27.573 | 70.00th=[ 343], 80.00th=[ 367], 90.00th=[ 396], 95.00th=[ 420], 00:34:27.573 | 99.00th=[ 449], 99.50th=[ 469], 99.90th=[ 482], 99.95th=[ 482], 00:34:27.573 | 99.99th=[ 482] 00:34:27.573 write: IOPS=1602, BW=6410KiB/s (6563kB/s)(6416KiB/1001msec); 0 zone resets 00:34:27.573 slat (usec): min=12, max=183, avg=41.68, stdev=10.20 00:34:27.573 clat (usec): min=95, max=4193, avg=253.57, stdev=113.29 00:34:27.573 lat (usec): min=119, max=4237, avg=295.26, stdev=115.44 00:34:27.573 clat percentiles (usec): 00:34:27.573 | 1.00th=[ 123], 5.00th=[ 151], 10.00th=[ 178], 20.00th=[ 204], 00:34:27.573 | 30.00th=[ 225], 40.00th=[ 239], 50.00th=[ 253], 60.00th=[ 265], 00:34:27.573 | 70.00th=[ 281], 80.00th=[ 297], 90.00th=[ 322], 95.00th=[ 347], 00:34:27.573 | 99.00th=[ 371], 99.50th=[ 392], 99.90th=[ 545], 99.95th=[ 4178], 00:34:27.573 | 99.99th=[ 4178] 00:34:27.573 bw ( KiB/s): min= 8128, max= 8128, per=28.11%, avg=8128.00, stdev= 0.00, samples=1 00:34:27.573 iops : min= 2032, max= 2032, avg=2032.00, stdev= 0.00, samples=1 00:34:27.573 lat (usec) : 100=0.03%, 250=32.29%, 500=67.61%, 750=0.03% 00:34:27.573 lat (msec) : 10=0.03% 00:34:27.573 cpu : usr=2.60%, sys=7.90%, ctx=3143, majf=0, minf=9 00:34:27.573 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:27.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:27.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:27.573 issued rwts: total=1536,1604,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:27.573 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:27.573 job1: (groupid=0, jobs=1): err= 0: pid=66654: Wed Nov 20 13:55:24 2024 00:34:27.573 read: IOPS=1894, BW=7576KiB/s (7758kB/s)(7584KiB/1001msec) 00:34:27.573 slat (nsec): min=6060, max=53121, avg=10978.85, stdev=6696.22 00:34:27.573 clat (usec): min=145, max=977, avg=270.31, stdev=53.76 00:34:27.573 lat (usec): min=153, max=997, avg=281.29, stdev=55.79 00:34:27.573 clat percentiles (usec): 00:34:27.573 | 1.00th=[ 161], 5.00th=[ 186], 10.00th=[ 204], 20.00th=[ 229], 00:34:27.574 | 30.00th=[ 245], 40.00th=[ 258], 50.00th=[ 269], 60.00th=[ 281], 00:34:27.574 | 70.00th=[ 293], 80.00th=[ 314], 90.00th=[ 334], 95.00th=[ 359], 00:34:27.574 | 99.00th=[ 408], 99.50th=[ 429], 99.90th=[ 611], 99.95th=[ 979], 00:34:27.574 | 99.99th=[ 979] 00:34:27.574 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:34:27.574 slat (nsec): min=8854, max=76694, avg=16352.98, stdev=10842.06 00:34:27.574 clat (usec): min=102, max=2825, avg=208.81, stdev=91.94 00:34:27.574 lat (usec): min=111, max=2843, avg=225.17, stdev=94.73 00:34:27.574 clat percentiles (usec): 00:34:27.574 | 1.00th=[ 122], 5.00th=[ 135], 10.00th=[ 147], 20.00th=[ 163], 00:34:27.574 | 30.00th=[ 178], 40.00th=[ 192], 50.00th=[ 202], 60.00th=[ 215], 00:34:27.574 | 70.00th=[ 227], 80.00th=[ 243], 90.00th=[ 265], 95.00th=[ 285], 00:34:27.574 | 99.00th=[ 343], 99.50th=[ 465], 99.90th=[ 1483], 99.95th=[ 1762], 00:34:27.574 | 99.99th=[ 2835] 00:34:27.574 bw ( KiB/s): min= 8192, max= 8192, per=28.33%, avg=8192.00, stdev= 0.00, samples=1 00:34:27.574 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:34:27.574 lat (usec) : 250=59.74%, 500=39.98%, 750=0.10%, 1000=0.08% 00:34:27.574 lat (msec) : 2=0.08%, 4=0.03% 00:34:27.574 cpu : usr=1.00%, sys=4.60%, ctx=3944, 
majf=0, minf=13 00:34:27.574 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:27.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:27.574 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:27.574 issued rwts: total=1896,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:27.574 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:27.574 job2: (groupid=0, jobs=1): err= 0: pid=66655: Wed Nov 20 13:55:24 2024 00:34:27.574 read: IOPS=1417, BW=5670KiB/s (5806kB/s)(5676KiB/1001msec) 00:34:27.574 slat (nsec): min=17650, max=66440, avg=28004.03, stdev=6336.12 00:34:27.574 clat (usec): min=188, max=866, avg=325.27, stdev=59.45 00:34:27.574 lat (usec): min=217, max=885, avg=353.27, stdev=60.49 00:34:27.574 clat percentiles (usec): 00:34:27.574 | 1.00th=[ 208], 5.00th=[ 231], 10.00th=[ 249], 20.00th=[ 273], 00:34:27.574 | 30.00th=[ 289], 40.00th=[ 310], 50.00th=[ 326], 60.00th=[ 343], 00:34:27.574 | 70.00th=[ 359], 80.00th=[ 375], 90.00th=[ 404], 95.00th=[ 424], 00:34:27.574 | 99.00th=[ 453], 99.50th=[ 461], 99.90th=[ 494], 99.95th=[ 865], 00:34:27.574 | 99.99th=[ 865] 00:34:27.574 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:34:27.574 slat (usec): min=22, max=235, avg=45.40, stdev=11.38 00:34:27.574 clat (usec): min=57, max=5804, avg=272.32, stdev=187.08 00:34:27.574 lat (usec): min=167, max=5846, avg=317.73, stdev=187.34 00:34:27.574 clat percentiles (usec): 00:34:27.574 | 1.00th=[ 157], 5.00th=[ 184], 10.00th=[ 200], 20.00th=[ 223], 00:34:27.574 | 30.00th=[ 237], 40.00th=[ 249], 50.00th=[ 262], 60.00th=[ 277], 00:34:27.574 | 70.00th=[ 289], 80.00th=[ 306], 90.00th=[ 330], 95.00th=[ 351], 00:34:27.574 | 99.00th=[ 392], 99.50th=[ 420], 99.90th=[ 4146], 99.95th=[ 5800], 00:34:27.574 | 99.99th=[ 5800] 00:34:27.574 bw ( KiB/s): min= 7480, max= 7480, per=25.87%, avg=7480.00, stdev= 0.00, samples=1 00:34:27.574 iops : min= 1870, max= 1870, avg=1870.00, stdev= 0.00, samples=1 00:34:27.574 lat (usec) : 100=0.03%, 250=25.85%, 500=73.84%, 750=0.07%, 1000=0.07% 00:34:27.574 lat (msec) : 2=0.07%, 10=0.07% 00:34:27.574 cpu : usr=2.20%, sys=9.00%, ctx=2958, majf=0, minf=9 00:34:27.574 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:27.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:27.574 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:27.574 issued rwts: total=1419,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:27.574 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:27.574 job3: (groupid=0, jobs=1): err= 0: pid=66656: Wed Nov 20 13:55:24 2024 00:34:27.574 read: IOPS=1792, BW=7169KiB/s (7341kB/s)(7176KiB/1001msec) 00:34:27.574 slat (nsec): min=7894, max=54589, avg=13369.42, stdev=5292.84 00:34:27.574 clat (usec): min=159, max=2477, avg=277.91, stdev=72.51 00:34:27.574 lat (usec): min=168, max=2493, avg=291.28, stdev=73.91 00:34:27.574 clat percentiles (usec): 00:34:27.574 | 1.00th=[ 176], 5.00th=[ 198], 10.00th=[ 217], 20.00th=[ 241], 00:34:27.574 | 30.00th=[ 253], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 285], 00:34:27.574 | 70.00th=[ 297], 80.00th=[ 318], 90.00th=[ 334], 95.00th=[ 355], 00:34:27.574 | 99.00th=[ 400], 99.50th=[ 441], 99.90th=[ 799], 99.95th=[ 2474], 00:34:27.574 | 99.99th=[ 2474] 00:34:27.574 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:34:27.574 slat (usec): min=12, max=126, avg=22.63, stdev= 8.22 00:34:27.574 clat (usec): 
min=99, max=369, avg=207.41, stdev=41.08 00:34:27.574 lat (usec): min=115, max=422, avg=230.04, stdev=43.57 00:34:27.574 clat percentiles (usec): 00:34:27.574 | 1.00th=[ 126], 5.00th=[ 147], 10.00th=[ 159], 20.00th=[ 172], 00:34:27.574 | 30.00th=[ 182], 40.00th=[ 194], 50.00th=[ 204], 60.00th=[ 217], 00:34:27.574 | 70.00th=[ 227], 80.00th=[ 243], 90.00th=[ 262], 95.00th=[ 281], 00:34:27.574 | 99.00th=[ 314], 99.50th=[ 322], 99.90th=[ 347], 99.95th=[ 355], 00:34:27.574 | 99.99th=[ 371] 00:34:27.574 bw ( KiB/s): min= 8192, max= 8192, per=28.33%, avg=8192.00, stdev= 0.00, samples=1 00:34:27.574 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:34:27.574 lat (usec) : 100=0.03%, 250=57.99%, 500=41.88%, 750=0.03%, 1000=0.05% 00:34:27.574 lat (msec) : 4=0.03% 00:34:27.574 cpu : usr=1.30%, sys=5.50%, ctx=3843, majf=0, minf=15 00:34:27.574 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:27.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:27.574 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:27.574 issued rwts: total=1794,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:27.574 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:27.574 00:34:27.574 Run status group 0 (all jobs): 00:34:27.574 READ: bw=25.9MiB/s (27.2MB/s), 5670KiB/s-7576KiB/s (5806kB/s-7758kB/s), io=26.0MiB (27.2MB), run=1001-1001msec 00:34:27.574 WRITE: bw=28.2MiB/s (29.6MB/s), 6138KiB/s-8184KiB/s (6285kB/s-8380kB/s), io=28.3MiB (29.6MB), run=1001-1001msec 00:34:27.574 00:34:27.574 Disk stats (read/write): 00:34:27.574 nvme0n1: ios=1174/1536, merge=0/0, ticks=397/413, in_queue=810, util=88.15% 00:34:27.574 nvme0n2: ios=1551/1805, merge=0/0, ticks=454/394, in_queue=848, util=87.95% 00:34:27.574 nvme0n3: ios=1024/1503, merge=0/0, ticks=351/435, in_queue=786, util=89.36% 00:34:27.574 nvme0n4: ios=1536/1690, merge=0/0, ticks=429/366, in_queue=795, util=89.63% 00:34:27.574 13:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:34:27.574 [global] 00:34:27.574 thread=1 00:34:27.574 invalidate=1 00:34:27.574 rw=write 00:34:27.574 time_based=1 00:34:27.574 runtime=1 00:34:27.574 ioengine=libaio 00:34:27.574 direct=1 00:34:27.574 bs=4096 00:34:27.574 iodepth=128 00:34:27.574 norandommap=0 00:34:27.574 numjobs=1 00:34:27.574 00:34:27.574 verify_dump=1 00:34:27.574 verify_backlog=512 00:34:27.574 verify_state_save=0 00:34:27.574 do_verify=1 00:34:27.574 verify=crc32c-intel 00:34:27.574 [job0] 00:34:27.574 filename=/dev/nvme0n1 00:34:27.574 [job1] 00:34:27.574 filename=/dev/nvme0n2 00:34:27.574 [job2] 00:34:27.574 filename=/dev/nvme0n3 00:34:27.574 [job3] 00:34:27.574 filename=/dev/nvme0n4 00:34:27.574 Could not set queue depth (nvme0n1) 00:34:27.574 Could not set queue depth (nvme0n2) 00:34:27.574 Could not set queue depth (nvme0n3) 00:34:27.574 Could not set queue depth (nvme0n4) 00:34:27.574 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:27.574 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:27.574 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:27.574 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:27.574 fio-3.35 00:34:27.574 Starting 4 threads 00:34:28.954 
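Before these fio passes, target/fio.sh assembled the target entirely over JSON-RPC, and every call is visible in the trace above: one TCP transport, seven 64 MB malloc bdevs with 512-byte blocks, two of them combined into a raid0 and three into a concat bdev, four namespaces on subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.3:4420, and a kernel-initiator connect — which is why the jobs address /dev/nvme0n1 through /dev/nvme0n4. A condensed sketch in trace order, with paths, flags and NQNs exactly as logged and return values ignored:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                 # issued seven times by the script -> Malloc0..Malloc6
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 \
    --hostid=105ec898-1662-46bd-85be-b241e399edb9
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME      # waitforserial: succeeds once 4 devices appear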
00:34:28.954 job0: (groupid=0, jobs=1): err= 0: pid=66717: Wed Nov 20 13:55:25 2024 00:34:28.954 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:34:28.954 slat (usec): min=7, max=4612, avg=135.22, stdev=626.83 00:34:28.954 clat (usec): min=8667, max=21274, avg=17858.17, stdev=1478.51 00:34:28.954 lat (usec): min=8683, max=21295, avg=17993.39, stdev=1358.36 00:34:28.954 clat percentiles (usec): 00:34:28.954 | 1.00th=[13173], 5.00th=[16450], 10.00th=[16909], 20.00th=[17171], 00:34:28.954 | 30.00th=[17433], 40.00th=[17695], 50.00th=[17695], 60.00th=[18220], 00:34:28.954 | 70.00th=[18482], 80.00th=[18744], 90.00th=[19530], 95.00th=[19792], 00:34:28.954 | 99.00th=[20841], 99.50th=[21103], 99.90th=[21365], 99.95th=[21365], 00:34:28.954 | 99.99th=[21365] 00:34:28.954 write: IOPS=3606, BW=14.1MiB/s (14.8MB/s)(14.1MiB/1003msec); 0 zone resets 00:34:28.954 slat (usec): min=22, max=5509, avg=132.98, stdev=561.93 00:34:28.954 clat (usec): min=341, max=20281, avg=17275.80, stdev=1631.14 00:34:28.954 lat (usec): min=3867, max=20658, avg=17408.78, stdev=1541.53 00:34:28.954 clat percentiles (usec): 00:34:28.954 | 1.00th=[13173], 5.00th=[15664], 10.00th=[16319], 20.00th=[16712], 00:34:28.954 | 30.00th=[16909], 40.00th=[17171], 50.00th=[17433], 60.00th=[17433], 00:34:28.954 | 70.00th=[17695], 80.00th=[18220], 90.00th=[18744], 95.00th=[19268], 00:34:28.954 | 99.00th=[20055], 99.50th=[20317], 99.90th=[20317], 99.95th=[20317], 00:34:28.954 | 99.99th=[20317] 00:34:28.954 bw ( KiB/s): min=12792, max=15911, per=30.42%, avg=14351.50, stdev=2205.47, samples=2 00:34:28.954 iops : min= 3198, max= 3977, avg=3587.50, stdev=550.84, samples=2 00:34:28.954 lat (usec) : 500=0.01% 00:34:28.954 lat (msec) : 4=0.06%, 10=0.83%, 20=96.13%, 50=2.97% 00:34:28.954 cpu : usr=4.19%, sys=13.77%, ctx=266, majf=0, minf=9 00:34:28.954 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:34:28.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.954 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:28.954 issued rwts: total=3584,3617,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:28.954 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:28.954 job1: (groupid=0, jobs=1): err= 0: pid=66718: Wed Nov 20 13:55:25 2024 00:34:28.954 read: IOPS=2358, BW=9434KiB/s (9660kB/s)(9500KiB/1007msec) 00:34:28.954 slat (usec): min=15, max=13390, avg=195.41, stdev=1312.03 00:34:28.954 clat (usec): min=1093, max=44548, avg=26541.72, stdev=4351.34 00:34:28.954 lat (usec): min=10002, max=53142, avg=26737.13, stdev=4372.82 00:34:28.954 clat percentiles (usec): 00:34:28.954 | 1.00th=[10552], 5.00th=[16581], 10.00th=[24773], 20.00th=[25035], 00:34:28.954 | 30.00th=[26084], 40.00th=[26608], 50.00th=[26870], 60.00th=[27395], 00:34:28.954 | 70.00th=[27919], 80.00th=[28443], 90.00th=[29754], 95.00th=[31327], 00:34:28.954 | 99.00th=[42730], 99.50th=[43254], 99.90th=[44303], 99.95th=[44303], 00:34:28.954 | 99.99th=[44303] 00:34:28.954 write: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec); 0 zone resets 00:34:28.954 slat (usec): min=9, max=18848, avg=201.78, stdev=1345.49 00:34:28.954 clat (usec): min=12978, max=36161, avg=25183.69, stdev=3127.19 00:34:28.954 lat (usec): min=15888, max=36524, avg=25385.47, stdev=2913.43 00:34:28.954 clat percentiles (usec): 00:34:28.954 | 1.00th=[15795], 5.00th=[21890], 10.00th=[22152], 20.00th=[23200], 00:34:28.954 | 30.00th=[24249], 40.00th=[24773], 50.00th=[25297], 60.00th=[25560], 00:34:28.954 | 70.00th=[26084], 
80.00th=[26346], 90.00th=[27919], 95.00th=[32113], 00:34:28.954 | 99.00th=[34341], 99.50th=[34341], 99.90th=[35914], 99.95th=[35914], 00:34:28.954 | 99.99th=[35914] 00:34:28.954 bw ( KiB/s): min= 9736, max=10744, per=21.71%, avg=10240.00, stdev=712.76, samples=2 00:34:28.954 iops : min= 2434, max= 2686, avg=2560.00, stdev=178.19, samples=2 00:34:28.954 lat (msec) : 2=0.02%, 20=5.43%, 50=94.55% 00:34:28.954 cpu : usr=2.19%, sys=10.04%, ctx=107, majf=0, minf=7 00:34:28.954 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:34:28.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.954 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:28.954 issued rwts: total=2375,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:28.954 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:28.954 job2: (groupid=0, jobs=1): err= 0: pid=66719: Wed Nov 20 13:55:25 2024 00:34:28.954 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:34:28.955 slat (usec): min=12, max=4980, avg=153.30, stdev=744.00 00:34:28.955 clat (usec): min=14327, max=22642, avg=20397.48, stdev=1051.56 00:34:28.955 lat (usec): min=18081, max=22660, avg=20550.79, stdev=749.10 00:34:28.955 clat percentiles (usec): 00:34:28.955 | 1.00th=[15926], 5.00th=[18482], 10.00th=[19530], 20.00th=[19792], 00:34:28.955 | 30.00th=[20055], 40.00th=[20317], 50.00th=[20579], 60.00th=[20579], 00:34:28.955 | 70.00th=[20841], 80.00th=[20841], 90.00th=[21365], 95.00th=[21890], 00:34:28.955 | 99.00th=[22414], 99.50th=[22676], 99.90th=[22676], 99.95th=[22676], 00:34:28.955 | 99.99th=[22676] 00:34:28.955 write: IOPS=3315, BW=13.0MiB/s (13.6MB/s)(13.0MiB/1004msec); 0 zone resets 00:34:28.955 slat (usec): min=19, max=7378, avg=149.72, stdev=654.25 00:34:28.955 clat (usec): min=377, max=23939, avg=19167.99, stdev=2251.81 00:34:28.955 lat (usec): min=4232, max=23992, avg=19317.71, stdev=2158.74 00:34:28.955 clat percentiles (usec): 00:34:28.955 | 1.00th=[ 8979], 5.00th=[16188], 10.00th=[17695], 20.00th=[18482], 00:34:28.955 | 30.00th=[18744], 40.00th=[19006], 50.00th=[19530], 60.00th=[19792], 00:34:28.955 | 70.00th=[20055], 80.00th=[20317], 90.00th=[20579], 95.00th=[21365], 00:34:28.955 | 99.00th=[23725], 99.50th=[23725], 99.90th=[23987], 99.95th=[23987], 00:34:28.955 | 99.99th=[23987] 00:34:28.955 bw ( KiB/s): min=12800, max=12833, per=27.17%, avg=12816.50, stdev=23.33, samples=2 00:34:28.955 iops : min= 3200, max= 3208, avg=3204.00, stdev= 5.66, samples=2 00:34:28.955 lat (usec) : 500=0.02% 00:34:28.955 lat (msec) : 10=0.95%, 20=46.06%, 50=52.98% 00:34:28.955 cpu : usr=2.79%, sys=13.86%, ctx=201, majf=0, minf=13 00:34:28.955 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:34:28.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.955 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:28.955 issued rwts: total=3072,3329,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:28.955 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:28.955 job3: (groupid=0, jobs=1): err= 0: pid=66720: Wed Nov 20 13:55:25 2024 00:34:28.955 read: IOPS=2043, BW=8176KiB/s (8372kB/s)(8192KiB/1002msec) 00:34:28.955 slat (usec): min=15, max=7999, avg=227.56, stdev=1162.60 00:34:28.955 clat (usec): min=17783, max=34133, avg=29472.20, stdev=2402.58 00:34:28.955 lat (usec): min=21969, max=34155, avg=29699.76, stdev=2129.58 00:34:28.955 clat percentiles (usec): 00:34:28.955 | 1.00th=[22152], 5.00th=[23725], 
10.00th=[25822], 20.00th=[28705], 00:34:28.955 | 30.00th=[28967], 40.00th=[29492], 50.00th=[29754], 60.00th=[30278], 00:34:28.955 | 70.00th=[30540], 80.00th=[31065], 90.00th=[32113], 95.00th=[32900], 00:34:28.955 | 99.00th=[33817], 99.50th=[33817], 99.90th=[34341], 99.95th=[34341], 00:34:28.955 | 99.99th=[34341] 00:34:28.955 write: IOPS=2364, BW=9457KiB/s (9684kB/s)(9476KiB/1002msec); 0 zone resets 00:34:28.955 slat (usec): min=22, max=7885, avg=216.05, stdev=1024.59 00:34:28.955 clat (usec): min=328, max=32101, avg=27609.13, stdev=3746.89 00:34:28.955 lat (usec): min=5730, max=32151, avg=27825.18, stdev=3603.57 00:34:28.955 clat percentiles (usec): 00:34:28.955 | 1.00th=[ 6521], 5.00th=[21365], 10.00th=[25822], 20.00th=[26608], 00:34:28.955 | 30.00th=[27657], 40.00th=[28181], 50.00th=[28443], 60.00th=[28705], 00:34:28.955 | 70.00th=[28967], 80.00th=[29754], 90.00th=[30540], 95.00th=[30802], 00:34:28.955 | 99.00th=[31851], 99.50th=[31851], 99.90th=[32113], 99.95th=[32113], 00:34:28.955 | 99.99th=[32113] 00:34:28.955 bw ( KiB/s): min= 8216, max= 9728, per=19.02%, avg=8972.00, stdev=1069.15, samples=2 00:34:28.955 iops : min= 2054, max= 2432, avg=2243.00, stdev=267.29, samples=2 00:34:28.955 lat (usec) : 500=0.02% 00:34:28.955 lat (msec) : 10=0.72%, 20=1.27%, 50=97.99% 00:34:28.955 cpu : usr=2.50%, sys=9.29%, ctx=140, majf=0, minf=14 00:34:28.955 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:34:28.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.955 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:28.955 issued rwts: total=2048,2369,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:28.955 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:28.955 00:34:28.955 Run status group 0 (all jobs): 00:34:28.955 READ: bw=43.0MiB/s (45.1MB/s), 8176KiB/s-14.0MiB/s (8372kB/s-14.6MB/s), io=43.3MiB (45.4MB), run=1002-1007msec 00:34:28.955 WRITE: bw=46.1MiB/s (48.3MB/s), 9457KiB/s-14.1MiB/s (9684kB/s-14.8MB/s), io=46.4MiB (48.6MB), run=1002-1007msec 00:34:28.955 00:34:28.955 Disk stats (read/write): 00:34:28.955 nvme0n1: ios=3122/3264, merge=0/0, ticks=12422/12190, in_queue=24612, util=90.19% 00:34:28.955 nvme0n2: ios=2097/2176, merge=0/0, ticks=53394/51219, in_queue=104613, util=90.64% 00:34:28.955 nvme0n3: ios=2605/3072, merge=0/0, ticks=11719/12696, in_queue=24415, util=90.64% 00:34:28.955 nvme0n4: ios=1830/2048, merge=0/0, ticks=12739/13221, in_queue=25960, util=91.33% 00:34:28.955 13:55:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:34:28.955 [global] 00:34:28.955 thread=1 00:34:28.955 invalidate=1 00:34:28.955 rw=randwrite 00:34:28.955 time_based=1 00:34:28.955 runtime=1 00:34:28.955 ioengine=libaio 00:34:28.955 direct=1 00:34:28.955 bs=4096 00:34:28.955 iodepth=128 00:34:28.955 norandommap=0 00:34:28.955 numjobs=1 00:34:28.955 00:34:28.955 verify_dump=1 00:34:28.955 verify_backlog=512 00:34:28.955 verify_state_save=0 00:34:28.955 do_verify=1 00:34:28.955 verify=crc32c-intel 00:34:28.955 [job0] 00:34:28.955 filename=/dev/nvme0n1 00:34:28.955 [job1] 00:34:28.955 filename=/dev/nvme0n2 00:34:28.955 [job2] 00:34:28.955 filename=/dev/nvme0n3 00:34:28.955 [job3] 00:34:28.955 filename=/dev/nvme0n4 00:34:28.955 Could not set queue depth (nvme0n1) 00:34:28.955 Could not set queue depth (nvme0n2) 00:34:28.955 Could not set queue depth (nvme0n3) 00:34:28.955 Could not set queue depth (nvme0n4) 
00:34:28.955 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:28.955 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:28.955 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:28.955 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:28.955 fio-3.35 00:34:28.955 Starting 4 threads 00:34:30.339 00:34:30.339 job0: (groupid=0, jobs=1): err= 0: pid=66777: Wed Nov 20 13:55:27 2024 00:34:30.339 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:34:30.339 slat (usec): min=3, max=14256, avg=150.96, stdev=1055.40 00:34:30.339 clat (usec): min=9888, max=35170, avg=20553.25, stdev=3456.19 00:34:30.339 lat (usec): min=9908, max=40999, avg=20704.21, stdev=3486.38 00:34:30.339 clat percentiles (usec): 00:34:30.339 | 1.00th=[12125], 5.00th=[15401], 10.00th=[16581], 20.00th=[17433], 00:34:30.339 | 30.00th=[18482], 40.00th=[19792], 50.00th=[21103], 60.00th=[21365], 00:34:30.339 | 70.00th=[22152], 80.00th=[23200], 90.00th=[23725], 95.00th=[25035], 00:34:30.339 | 99.00th=[30802], 99.50th=[32900], 99.90th=[35390], 99.95th=[35390], 00:34:30.339 | 99.99th=[35390] 00:34:30.339 write: IOPS=3542, BW=13.8MiB/s (14.5MB/s)(13.9MiB/1004msec); 0 zone resets 00:34:30.339 slat (usec): min=5, max=14033, avg=144.40, stdev=967.06 00:34:30.339 clat (usec): min=3727, max=28943, avg=18006.22, stdev=3415.95 00:34:30.339 lat (usec): min=3756, max=29248, avg=18150.61, stdev=3324.91 00:34:30.339 clat percentiles (usec): 00:34:30.340 | 1.00th=[ 4948], 5.00th=[12911], 10.00th=[14091], 20.00th=[15401], 00:34:30.340 | 30.00th=[16188], 40.00th=[17695], 50.00th=[18220], 60.00th=[19006], 00:34:30.340 | 70.00th=[19530], 80.00th=[20055], 90.00th=[21365], 95.00th=[23200], 00:34:30.340 | 99.00th=[26608], 99.50th=[26870], 99.90th=[28967], 99.95th=[28967], 00:34:30.340 | 99.99th=[28967] 00:34:30.340 bw ( KiB/s): min=12761, max=14704, per=29.13%, avg=13732.50, stdev=1373.91, samples=2 00:34:30.340 iops : min= 3190, max= 3676, avg=3433.00, stdev=343.65, samples=2 00:34:30.340 lat (msec) : 4=0.14%, 10=0.80%, 20=58.61%, 50=40.46% 00:34:30.340 cpu : usr=2.69%, sys=8.57%, ctx=140, majf=0, minf=5 00:34:30.340 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:34:30.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.340 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:30.340 issued rwts: total=3072,3557,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.340 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:30.340 job1: (groupid=0, jobs=1): err= 0: pid=66778: Wed Nov 20 13:55:27 2024 00:34:30.340 read: IOPS=3189, BW=12.5MiB/s (13.1MB/s)(12.5MiB/1003msec) 00:34:30.340 slat (usec): min=3, max=15981, avg=148.09, stdev=1081.21 00:34:30.340 clat (usec): min=1258, max=40330, avg=19955.10, stdev=3975.25 00:34:30.340 lat (usec): min=8361, max=40356, avg=20103.19, stdev=4014.21 00:34:30.340 clat percentiles (usec): 00:34:30.340 | 1.00th=[ 8979], 5.00th=[13042], 10.00th=[15139], 20.00th=[16712], 00:34:30.340 | 30.00th=[18482], 40.00th=[19268], 50.00th=[20317], 60.00th=[21365], 00:34:30.340 | 70.00th=[22414], 80.00th=[22676], 90.00th=[23987], 95.00th=[26346], 00:34:30.341 | 99.00th=[29754], 99.50th=[30016], 99.90th=[32900], 99.95th=[34341], 00:34:30.341 | 99.99th=[40109] 00:34:30.341 write: 
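All of the fio passes in this test come from the same wrapper with only queue depth and workload changing — so far scripts/fio-wrapper -p nvmf -i 4096 has been run with -d 1 -t write, -d 1 -t randwrite, -d 128 -t write and now -d 128 -t randwrite, each for one second with verification (-r 1 -v). A standalone job file equivalent to the [global]/[jobN] blocks echoed above for this pass would look roughly like the following; the /tmp path and the explicit fio invocation are illustrative, while the option values are copied from the echoed configuration:

cat > /tmp/nvmf_randwrite.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=128
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
fio /tmp/nvmf_randwrite.fio

The repeated "Could not set queue depth" lines are fio warnings rather than errors; the per-job summaries that follow show all four jobs still completing against the four namespaces.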
IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:34:30.341 slat (usec): min=5, max=13910, avg=142.38, stdev=988.98 00:34:30.341 clat (usec): min=7104, max=26389, avg=17619.89, stdev=2773.38 00:34:30.341 lat (usec): min=10716, max=26449, avg=17762.27, stdev=2641.17 00:34:30.341 clat percentiles (usec): 00:34:30.341 | 1.00th=[10683], 5.00th=[13304], 10.00th=[14353], 20.00th=[15401], 00:34:30.341 | 30.00th=[16319], 40.00th=[16909], 50.00th=[17433], 60.00th=[18482], 00:34:30.341 | 70.00th=[18744], 80.00th=[20055], 90.00th=[20579], 95.00th=[21365], 00:34:30.341 | 99.00th=[26084], 99.50th=[26084], 99.90th=[26346], 99.95th=[26346], 00:34:30.341 | 99.99th=[26346] 00:34:30.341 bw ( KiB/s): min=13330, max=15360, per=30.43%, avg=14345.00, stdev=1435.43, samples=2 00:34:30.341 iops : min= 3332, max= 3840, avg=3586.00, stdev=359.21, samples=2 00:34:30.341 lat (msec) : 2=0.01%, 10=1.50%, 20=63.08%, 50=35.40% 00:34:30.341 cpu : usr=2.00%, sys=6.59%, ctx=136, majf=0, minf=3 00:34:30.341 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:34:30.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:30.341 issued rwts: total=3199,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.341 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:30.341 job2: (groupid=0, jobs=1): err= 0: pid=66779: Wed Nov 20 13:55:27 2024 00:34:30.341 read: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec) 00:34:30.341 slat (usec): min=4, max=8874, avg=224.61, stdev=1170.95 00:34:30.341 clat (usec): min=18748, max=34653, avg=29461.79, stdev=2474.68 00:34:30.341 lat (usec): min=24460, max=35541, avg=29686.40, stdev=2195.81 00:34:30.341 clat percentiles (usec): 00:34:30.341 | 1.00th=[22152], 5.00th=[25822], 10.00th=[26608], 20.00th=[27657], 00:34:30.341 | 30.00th=[28181], 40.00th=[28705], 50.00th=[29492], 60.00th=[30278], 00:34:30.341 | 70.00th=[31065], 80.00th=[31589], 90.00th=[32900], 95.00th=[32900], 00:34:30.341 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:34:30.341 | 99.99th=[34866] 00:34:30.341 write: IOPS=2325, BW=9301KiB/s (9525kB/s)(9348KiB/1005msec); 0 zone resets 00:34:30.341 slat (usec): min=6, max=8621, avg=224.84, stdev=1121.51 00:34:30.341 clat (usec): min=978, max=35529, avg=28311.59, stdev=3893.72 00:34:30.341 lat (usec): min=9052, max=35538, avg=28536.43, stdev=3740.48 00:34:30.341 clat percentiles (usec): 00:34:30.341 | 1.00th=[ 9241], 5.00th=[23725], 10.00th=[25822], 20.00th=[26346], 00:34:30.342 | 30.00th=[26870], 40.00th=[27395], 50.00th=[28443], 60.00th=[28967], 00:34:30.342 | 70.00th=[29754], 80.00th=[31589], 90.00th=[33162], 95.00th=[33424], 00:34:30.342 | 99.00th=[35390], 99.50th=[35390], 99.90th=[35390], 99.95th=[35390], 00:34:30.342 | 99.99th=[35390] 00:34:30.342 bw ( KiB/s): min= 8200, max= 9472, per=18.74%, avg=8836.00, stdev=899.44, samples=2 00:34:30.342 iops : min= 2050, max= 2368, avg=2209.00, stdev=224.86, samples=2 00:34:30.342 lat (usec) : 1000=0.02% 00:34:30.342 lat (msec) : 10=0.73%, 20=0.94%, 50=98.31% 00:34:30.342 cpu : usr=1.89%, sys=7.17%, ctx=139, majf=0, minf=5 00:34:30.342 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:34:30.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.342 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:30.342 issued rwts: total=2048,2337,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:34:30.342 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:30.342 job3: (groupid=0, jobs=1): err= 0: pid=66781: Wed Nov 20 13:55:27 2024 00:34:30.342 read: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec) 00:34:30.342 slat (usec): min=4, max=17088, avg=223.49, stdev=1570.88 00:34:30.342 clat (usec): min=13896, max=50063, avg=30525.32, stdev=4433.23 00:34:30.342 lat (usec): min=13901, max=61082, avg=30748.81, stdev=4495.48 00:34:30.342 clat percentiles (usec): 00:34:30.342 | 1.00th=[16319], 5.00th=[23462], 10.00th=[24511], 20.00th=[27395], 00:34:30.342 | 30.00th=[28967], 40.00th=[30802], 50.00th=[31327], 60.00th=[32375], 00:34:30.342 | 70.00th=[32637], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:34:30.342 | 99.00th=[43254], 99.50th=[48497], 99.90th=[50070], 99.95th=[50070], 00:34:30.342 | 99.99th=[50070] 00:34:30.342 write: IOPS=2355, BW=9421KiB/s (9647kB/s)(9468KiB/1005msec); 0 zone resets 00:34:30.342 slat (usec): min=7, max=17553, avg=223.17, stdev=1528.25 00:34:30.342 clat (usec): min=748, max=42873, avg=27352.52, stdev=4582.83 00:34:30.342 lat (usec): min=14543, max=43210, avg=27575.69, stdev=4386.80 00:34:30.342 clat percentiles (usec): 00:34:30.342 | 1.00th=[14877], 5.00th=[20055], 10.00th=[21890], 20.00th=[23725], 00:34:30.342 | 30.00th=[24249], 40.00th=[26346], 50.00th=[27919], 60.00th=[28967], 00:34:30.342 | 70.00th=[30016], 80.00th=[31589], 90.00th=[32900], 95.00th=[33162], 00:34:30.343 | 99.00th=[34341], 99.50th=[39060], 99.90th=[42730], 99.95th=[42730], 00:34:30.343 | 99.99th=[42730] 00:34:30.343 bw ( KiB/s): min= 8777, max= 9152, per=19.01%, avg=8964.50, stdev=265.17, samples=2 00:34:30.343 iops : min= 2194, max= 2288, avg=2241.00, stdev=66.47, samples=2 00:34:30.343 lat (usec) : 750=0.02% 00:34:30.343 lat (msec) : 20=4.03%, 50=95.88%, 100=0.07% 00:34:30.343 cpu : usr=1.79%, sys=7.37%, ctx=88, majf=0, minf=8 00:34:30.343 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:34:30.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.343 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:30.343 issued rwts: total=2048,2367,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.343 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:30.343 00:34:30.343 Run status group 0 (all jobs): 00:34:30.343 READ: bw=40.3MiB/s (42.3MB/s), 8151KiB/s-12.5MiB/s (8347kB/s-13.1MB/s), io=40.5MiB (42.5MB), run=1003-1005msec 00:34:30.343 WRITE: bw=46.0MiB/s (48.3MB/s), 9301KiB/s-14.0MiB/s (9525kB/s-14.6MB/s), io=46.3MiB (48.5MB), run=1003-1005msec 00:34:30.343 00:34:30.343 Disk stats (read/write): 00:34:30.343 nvme0n1: ios=2610/3008, merge=0/0, ticks=51954/53260, in_queue=105214, util=89.68% 00:34:30.343 nvme0n2: ios=2735/3072, merge=0/0, ticks=54062/53236, in_queue=107298, util=90.84% 00:34:30.343 nvme0n3: ios=1825/2048, merge=0/0, ticks=12653/13682, in_queue=26335, util=90.64% 00:34:30.343 nvme0n4: ios=1766/2048, merge=0/0, ticks=51521/53661, in_queue=105182, util=90.41% 00:34:30.343 13:55:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:34:30.343 13:55:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=66794 00:34:30.343 13:55:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:34:30.343 13:55:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:34:30.343 [global] 00:34:30.343 thread=1 
00:34:30.343 invalidate=1 00:34:30.343 rw=read 00:34:30.343 time_based=1 00:34:30.343 runtime=10 00:34:30.343 ioengine=libaio 00:34:30.343 direct=1 00:34:30.343 bs=4096 00:34:30.343 iodepth=1 00:34:30.343 norandommap=1 00:34:30.343 numjobs=1 00:34:30.343 00:34:30.343 [job0] 00:34:30.343 filename=/dev/nvme0n1 00:34:30.343 [job1] 00:34:30.343 filename=/dev/nvme0n2 00:34:30.343 [job2] 00:34:30.343 filename=/dev/nvme0n3 00:34:30.343 [job3] 00:34:30.343 filename=/dev/nvme0n4 00:34:30.343 Could not set queue depth (nvme0n1) 00:34:30.343 Could not set queue depth (nvme0n2) 00:34:30.343 Could not set queue depth (nvme0n3) 00:34:30.343 Could not set queue depth (nvme0n4) 00:34:30.343 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:30.343 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:30.343 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:30.343 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:30.343 fio-3.35 00:34:30.343 Starting 4 threads 00:34:33.674 13:55:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:34:33.674 fio: pid=66837, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:33.674 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=34942976, buflen=4096 00:34:33.674 13:55:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:34:33.674 fio: pid=66836, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:33.674 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=27410432, buflen=4096 00:34:33.674 13:55:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:33.674 13:55:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:34:33.931 fio: pid=66834, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:33.931 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=28258304, buflen=4096 00:34:33.932 13:55:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:33.932 13:55:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:34:34.188 fio: pid=66835, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:34.188 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=60481536, buflen=4096 00:34:34.188 00:34:34.188 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66834: Wed Nov 20 13:55:31 2024 00:34:34.188 read: IOPS=2059, BW=8238KiB/s (8435kB/s)(26.9MiB/3350msec) 00:34:34.188 slat (usec): min=6, max=15602, avg=29.61, stdev=326.41 00:34:34.188 clat (usec): min=115, max=120795, avg=453.65, stdev=1458.77 00:34:34.188 lat (usec): min=124, max=120807, avg=483.26, stdev=1494.75 00:34:34.188 clat percentiles (usec): 00:34:34.188 | 1.00th=[ 141], 5.00th=[ 176], 10.00th=[ 204], 20.00th=[ 351], 
00:34:34.188 | 30.00th=[ 424], 40.00th=[ 445], 50.00th=[ 461], 60.00th=[ 474], 00:34:34.188 | 70.00th=[ 494], 80.00th=[ 515], 90.00th=[ 553], 95.00th=[ 611], 00:34:34.188 | 99.00th=[ 791], 99.50th=[ 816], 99.90th=[ 1188], 99.95th=[ 3359], 00:34:34.188 | 99.99th=[121111] 00:34:34.188 bw ( KiB/s): min= 7272, max= 8008, per=18.98%, avg=7772.00, stdev=272.79, samples=6 00:34:34.188 iops : min= 1818, max= 2002, avg=1943.00, stdev=68.20, samples=6 00:34:34.188 lat (usec) : 250=16.97%, 500=56.93%, 750=24.26%, 1000=1.67% 00:34:34.188 lat (msec) : 2=0.09%, 4=0.03%, 10=0.03%, 250=0.01% 00:34:34.188 cpu : usr=0.93%, sys=4.09%, ctx=6908, majf=0, minf=1 00:34:34.188 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:34.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:34.188 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:34.188 issued rwts: total=6900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:34.188 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:34.188 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66835: Wed Nov 20 13:55:31 2024 00:34:34.188 read: IOPS=4097, BW=16.0MiB/s (16.8MB/s)(57.7MiB/3604msec) 00:34:34.188 slat (usec): min=5, max=15369, avg=13.08, stdev=224.89 00:34:34.188 clat (usec): min=96, max=22767, avg=230.07, stdev=195.27 00:34:34.188 lat (usec): min=103, max=22785, avg=243.15, stdev=298.86 00:34:34.188 clat percentiles (usec): 00:34:34.188 | 1.00th=[ 109], 5.00th=[ 122], 10.00th=[ 159], 20.00th=[ 198], 00:34:34.188 | 30.00th=[ 212], 40.00th=[ 221], 50.00th=[ 231], 60.00th=[ 239], 00:34:34.188 | 70.00th=[ 249], 80.00th=[ 265], 90.00th=[ 285], 95.00th=[ 306], 00:34:34.188 | 99.00th=[ 351], 99.50th=[ 371], 99.90th=[ 693], 99.95th=[ 1123], 00:34:34.188 | 99.99th=[ 2147] 00:34:34.188 bw ( KiB/s): min=14608, max=17592, per=39.23%, avg=16061.33, stdev=1184.12, samples=6 00:34:34.188 iops : min= 3652, max= 4398, avg=4015.33, stdev=296.03, samples=6 00:34:34.188 lat (usec) : 100=0.04%, 250=70.43%, 500=29.38%, 750=0.05%, 1000=0.03% 00:34:34.188 lat (msec) : 2=0.04%, 4=0.01%, 50=0.01% 00:34:34.188 cpu : usr=0.80%, sys=3.41%, ctx=14774, majf=0, minf=1 00:34:34.188 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:34.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:34.188 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:34.188 issued rwts: total=14767,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:34.188 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:34.188 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66836: Wed Nov 20 13:55:31 2024 00:34:34.188 read: IOPS=2151, BW=8604KiB/s (8811kB/s)(26.1MiB/3111msec) 00:34:34.188 slat (usec): min=6, max=11602, avg=29.49, stdev=198.17 00:34:34.188 clat (usec): min=133, max=3634, avg=432.57, stdev=114.07 00:34:34.188 lat (usec): min=140, max=11975, avg=462.06, stdev=229.55 00:34:34.188 clat percentiles (usec): 00:34:34.188 | 1.00th=[ 180], 5.00th=[ 206], 10.00th=[ 243], 20.00th=[ 379], 00:34:34.188 | 30.00th=[ 420], 40.00th=[ 437], 50.00th=[ 449], 60.00th=[ 465], 00:34:34.188 | 70.00th=[ 482], 80.00th=[ 502], 90.00th=[ 529], 95.00th=[ 562], 00:34:34.188 | 99.00th=[ 619], 99.50th=[ 644], 99.90th=[ 1188], 99.95th=[ 1614], 00:34:34.188 | 99.99th=[ 3621] 00:34:34.188 bw ( KiB/s): min= 7968, max= 9000, per=20.25%, avg=8290.67, stdev=375.63, 
samples=6 00:34:34.188 iops : min= 1992, max= 2250, avg=2072.67, stdev=93.91, samples=6 00:34:34.188 lat (usec) : 250=10.65%, 500=68.31%, 750=20.83%, 1000=0.03% 00:34:34.188 lat (msec) : 2=0.13%, 4=0.03% 00:34:34.188 cpu : usr=0.87%, sys=5.43%, ctx=6695, majf=0, minf=2 00:34:34.188 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:34.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:34.188 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:34.188 issued rwts: total=6693,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:34.188 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:34.188 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66837: Wed Nov 20 13:55:31 2024 00:34:34.188 read: IOPS=2951, BW=11.5MiB/s (12.1MB/s)(33.3MiB/2891msec) 00:34:34.188 slat (nsec): min=7426, max=77103, avg=21696.33, stdev=8297.04 00:34:34.188 clat (usec): min=197, max=3747, avg=314.55, stdev=73.97 00:34:34.188 lat (usec): min=208, max=3777, avg=336.24, stdev=75.65 00:34:34.188 clat percentiles (usec): 00:34:34.188 | 1.00th=[ 227], 5.00th=[ 245], 10.00th=[ 258], 20.00th=[ 273], 00:34:34.188 | 30.00th=[ 289], 40.00th=[ 302], 50.00th=[ 314], 60.00th=[ 326], 00:34:34.188 | 70.00th=[ 338], 80.00th=[ 351], 90.00th=[ 371], 95.00th=[ 388], 00:34:34.188 | 99.00th=[ 420], 99.50th=[ 441], 99.90th=[ 816], 99.95th=[ 1221], 00:34:34.188 | 99.99th=[ 3752] 00:34:34.188 bw ( KiB/s): min=11456, max=12616, per=28.91%, avg=11836.80, stdev=470.49, samples=5 00:34:34.188 iops : min= 2864, max= 3154, avg=2959.20, stdev=117.62, samples=5 00:34:34.188 lat (usec) : 250=7.21%, 500=92.62%, 750=0.06%, 1000=0.04% 00:34:34.188 lat (msec) : 2=0.04%, 4=0.04% 00:34:34.188 cpu : usr=1.28%, sys=6.02%, ctx=8533, majf=0, minf=2 00:34:34.188 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:34.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:34.188 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:34.188 issued rwts: total=8532,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:34.188 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:34.188 00:34:34.188 Run status group 0 (all jobs): 00:34:34.188 READ: bw=40.0MiB/s (41.9MB/s), 8238KiB/s-16.0MiB/s (8435kB/s-16.8MB/s), io=144MiB (151MB), run=2891-3604msec 00:34:34.188 00:34:34.188 Disk stats (read/write): 00:34:34.188 nvme0n1: ios=5850/0, merge=0/0, ticks=2951/0, in_queue=2951, util=94.95% 00:34:34.188 nvme0n2: ios=13258/0, merge=0/0, ticks=3210/0, in_queue=3210, util=95.24% 00:34:34.188 nvme0n3: ios=5847/0, merge=0/0, ticks=2727/0, in_queue=2727, util=96.58% 00:34:34.188 nvme0n4: ios=8518/0, merge=0/0, ticks=2710/0, in_queue=2710, util=96.78% 00:34:34.188 13:55:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:34.188 13:55:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:34:34.446 13:55:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:34.446 13:55:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:34:34.704 13:55:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 
-- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:34.704 13:55:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:34:34.961 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:34.961 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:34:35.219 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:35.219 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:34:35.219 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:34:35.219 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 66794 00:34:35.219 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:34:35.219 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:35.477 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:35.477 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:35.478 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:34:35.478 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:35.478 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:34:35.478 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:35.478 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:35.478 nvmf hotplug test: fio failed as expected 00:34:35.478 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:34:35.478 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:34:35.478 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:34:35.478 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:35.478 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:34:35.478 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:34:35.478 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:34:35.737 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:34:35.737 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:34:35.737 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:35.737 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:34:35.737 
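(Editor's note, not part of the trace: this is the tail of the hotplug test. The remaining malloc bdevs are deleted out from under the running fio job, fio exits with errors as intended ("fio failed as expected"), and the harness disconnects the kernel initiator and removes the subsystem. Condensed into plain commands, the teardown is roughly the sketch below; the NQN, rpc.py path, and state-file names are taken from the trace, and this is a summary of what the script does, not a verbatim excerpt.
  # detach the namespace on the initiator side
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  # remove the subsystem on the target and clean up fio verify-state files
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  rm -f ./local-job0-0-verify.state ./local-job1-1-verify.state ./local-job2-2-verify.state
)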
13:55:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:35.737 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:34:35.737 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:35.737 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:35.737 rmmod nvme_tcp 00:34:35.737 rmmod nvme_fabrics 00:34:35.737 rmmod nvme_keyring 00:34:35.737 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:35.737 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:34:35.737 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:34:35.737 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 66414 ']' 00:34:35.737 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 66414 00:34:35.737 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 66414 ']' 00:34:35.737 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 66414 00:34:35.737 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:34:35.737 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:35.737 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66414 00:34:35.737 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:35.737 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:35.737 killing process with pid 66414 00:34:35.737 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66414' 00:34:35.737 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 66414 00:34:35.737 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 66414 00:34:35.998 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:35.998 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:35.998 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:35.998 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:34:35.998 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:34:35.998 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:34:35.998 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:35.998 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:35.998 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:34:35.998 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:34:35.998 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 
nomaster 00:34:35.998 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:34:35.998 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:34:35.998 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:34:35.998 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:34:35.998 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:34:35.998 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:34:35.998 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:34:35.999 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:34:36.259 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:34:36.259 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:36.259 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:36.259 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:34:36.259 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:36.259 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:36.259 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:36.259 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:34:36.259 00:34:36.259 real 0m19.608s 00:34:36.259 user 1m15.108s 00:34:36.259 sys 0m7.720s 00:34:36.259 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:36.259 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:36.259 ************************************ 00:34:36.259 END TEST nvmf_fio_target 00:34:36.259 ************************************ 00:34:36.259 13:55:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:34:36.259 13:55:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:36.259 13:55:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:36.259 13:55:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:34:36.259 ************************************ 00:34:36.259 START TEST nvmf_bdevio 00:34:36.259 ************************************ 00:34:36.259 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:34:36.520 * Looking for test storage... 
00:34:36.520 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:36.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:36.520 --rc genhtml_branch_coverage=1 00:34:36.520 --rc genhtml_function_coverage=1 00:34:36.520 --rc genhtml_legend=1 00:34:36.520 --rc geninfo_all_blocks=1 00:34:36.520 --rc geninfo_unexecuted_blocks=1 00:34:36.520 00:34:36.520 ' 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:36.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:36.520 --rc genhtml_branch_coverage=1 00:34:36.520 --rc genhtml_function_coverage=1 00:34:36.520 --rc genhtml_legend=1 00:34:36.520 --rc geninfo_all_blocks=1 00:34:36.520 --rc geninfo_unexecuted_blocks=1 00:34:36.520 00:34:36.520 ' 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:36.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:36.520 --rc genhtml_branch_coverage=1 00:34:36.520 --rc genhtml_function_coverage=1 00:34:36.520 --rc genhtml_legend=1 00:34:36.520 --rc geninfo_all_blocks=1 00:34:36.520 --rc geninfo_unexecuted_blocks=1 00:34:36.520 00:34:36.520 ' 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:36.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:36.520 --rc genhtml_branch_coverage=1 00:34:36.520 --rc genhtml_function_coverage=1 00:34:36.520 --rc genhtml_legend=1 00:34:36.520 --rc geninfo_all_blocks=1 00:34:36.520 --rc geninfo_unexecuted_blocks=1 00:34:36.520 00:34:36.520 ' 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=105ec898-1662-46bd-85be-b241e399edb9 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:36.520 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:36.521 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
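(Editor's note, not part of the trace: nvmftestinit is where common.sh starts building the virtual test network. A few lines earlier the harness generated a host NQN / host ID pair with `nvme gen-hostnqn`; outside the harness, the equivalent manual steps to identify as that host against the listener created later in this log would look roughly like the sketch below. It is illustrative only -- the bdevio test itself drives I/O through the SPDK initiator via a JSON config, not the kernel initiator.
  # generate a host NQN (the uuid differs on every invocation)
  HOSTNQN=$(nvme gen-hostnqn)
  # connect to the NVMe/TCP listener the test sets up at 10.0.0.3:4420
  nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn "$HOSTNQN"
  # detach again when done
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
)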
00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:34:36.521 Cannot find device "nvmf_init_br" 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:34:36.521 Cannot find device "nvmf_init_br2" 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:34:36.521 Cannot find device "nvmf_tgt_br" 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:34:36.521 Cannot find device "nvmf_tgt_br2" 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:34:36.521 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:34:36.521 Cannot find device "nvmf_init_br" 00:34:36.522 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:34:36.522 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:34:36.522 Cannot find device "nvmf_init_br2" 00:34:36.522 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:34:36.522 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:34:36.781 Cannot find device "nvmf_tgt_br" 00:34:36.781 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:34:36.781 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:34:36.781 Cannot find device "nvmf_tgt_br2" 00:34:36.781 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:34:36.781 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:34:36.781 Cannot find device "nvmf_br" 00:34:36.781 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:34:36.781 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:34:36.781 Cannot find device "nvmf_init_if" 00:34:36.781 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:34:36.781 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:34:36.781 Cannot find device "nvmf_init_if2" 00:34:36.781 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:34:36.781 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:36.781 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:36.781 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:34:36.781 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:36.781 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:36.781 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:34:36.781 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:34:36.781 
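(Editor's note, not part of the trace: the "Cannot find device" / "Cannot open network namespace" lines above are expected -- the harness first tries to delete any leftovers from a previous run, and on a fresh host there is nothing to remove. The trace that follows then builds the topology from scratch. Stripped of the xtrace prefixes, the setup boils down to roughly this sketch; interface names and addresses are the ones common.sh uses, and the second veth pair (nvmf_init_if2 / nvmf_tgt_if2 on 10.0.0.2 and 10.0.0.4) is wired up the same way.
  # the target application lives in its own network namespace
  ip netns add nvmf_tgt_ns_spdk
  # one veth pair for the initiator side, one for the target side
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # addressing: initiator 10.0.0.1, target 10.0.0.3
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_tgt_br up
  # bridge the host-side ends together and allow NVMe/TCP traffic in
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
)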
13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:34:36.781 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:34:36.781 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:34:36.781 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:34:36.781 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:36.781 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:34:36.781 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:34:36.781 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:34:36.781 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:34:36.781 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:34:36.781 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:34:36.781 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:34:36.781 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:34:36.781 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:34:36.781 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:34:36.781 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:34:36.781 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:36.781 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:36.781 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:36.781 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:34:36.781 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:34:36.781 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:34:36.781 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:34:36.781 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:36.781 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:36.781 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:36.781 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:34:36.781 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:34:36.781 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:34:36.781 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:36.781 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:34:36.781 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:34:37.039 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:34:37.039 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:34:37.039 00:34:37.039 --- 10.0.0.3 ping statistics --- 00:34:37.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:37.039 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:34:37.039 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:34:37.039 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:34:37.039 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.108 ms 00:34:37.039 00:34:37.039 --- 10.0.0.4 ping statistics --- 00:34:37.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:37.039 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:34:37.039 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:34:37.039 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:37.039 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:34:37.039 00:34:37.039 --- 10.0.0.1 ping statistics --- 00:34:37.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:37.039 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:34:37.039 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:34:37.039 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:37.039 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:34:37.039 00:34:37.039 --- 10.0.0.2 ping statistics --- 00:34:37.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:37.039 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:34:37.040 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:37.040 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:34:37.040 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:37.040 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:37.040 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:37.040 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:37.040 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:37.040 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:37.040 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:37.040 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:34:37.040 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:37.040 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:37.040 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:37.040 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=67156 00:34:37.040 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:34:37.040 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 67156 00:34:37.040 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 67156 ']' 00:34:37.040 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:37.040 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:37.040 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:37.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:37.040 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:37.040 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:37.040 [2024-11-20 13:55:34.224896] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:34:37.040 [2024-11-20 13:55:34.224967] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:37.313 [2024-11-20 13:55:34.368396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:37.313 [2024-11-20 13:55:34.437458] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:37.313 [2024-11-20 13:55:34.437509] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:37.313 [2024-11-20 13:55:34.437516] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:37.313 [2024-11-20 13:55:34.437522] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:37.313 [2024-11-20 13:55:34.437527] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:37.313 [2024-11-20 13:55:34.438439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:37.313 [2024-11-20 13:55:34.438486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:37.313 [2024-11-20 13:55:34.438686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:37.313 [2024-11-20 13:55:34.438693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:37.313 [2024-11-20 13:55:34.490832] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:34:37.879 13:55:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:37.879 13:55:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:34:37.879 13:55:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:37.879 13:55:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:37.879 13:55:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:37.879 13:55:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:37.879 13:55:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:37.879 13:55:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.879 13:55:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:37.879 [2024-11-20 13:55:35.155126] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:37.879 13:55:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.879 13:55:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:37.879 13:55:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.879 13:55:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:38.138 Malloc0 00:34:38.138 13:55:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.138 13:55:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:34:38.138 13:55:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.138 13:55:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:38.138 13:55:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.138 13:55:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:38.138 13:55:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.138 13:55:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:38.138 13:55:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.138 13:55:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:34:38.138 13:55:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.138 13:55:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:38.138 [2024-11-20 13:55:35.233501] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:34:38.138 13:55:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.138 13:55:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:34:38.138 13:55:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:34:38.138 13:55:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:34:38.138 13:55:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:34:38.138 13:55:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:38.138 13:55:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:38.138 { 00:34:38.138 "params": { 00:34:38.138 "name": "Nvme$subsystem", 00:34:38.138 "trtype": "$TEST_TRANSPORT", 00:34:38.138 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:38.138 "adrfam": "ipv4", 00:34:38.138 "trsvcid": "$NVMF_PORT", 00:34:38.138 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:38.138 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:38.138 "hdgst": ${hdgst:-false}, 00:34:38.138 "ddgst": ${ddgst:-false} 00:34:38.138 }, 00:34:38.138 "method": "bdev_nvme_attach_controller" 00:34:38.138 } 00:34:38.138 EOF 00:34:38.138 )") 00:34:38.138 13:55:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:34:38.138 13:55:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
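(Editor's note, not part of the trace: with the target application up, the test provisions a single namespace over NVMe/TCP and then generates the small JSON config printed just below, which tells the bdevio app to attach to that namespace with bdev_nvme_attach_controller. The RPC sequence being traced here is, in plain form, roughly the sketch below; in the test these calls go through rpc_cmd against the in-process target, so invoking rpc.py directly is only an equivalent illustration.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # create the TCP transport with the same options the script passes
  $RPC nvmf_create_transport -t tcp -o -u 8192
  # a 64 MiB, 512-byte-block malloc bdev to serve as the namespace
  $RPC bdev_malloc_create 64 512 -b Malloc0
  # subsystem cnode1: allow any host (-a), serial number (-s), namespace, TCP listener
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
)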
00:34:38.138 13:55:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:34:38.138 13:55:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:38.138 "params": { 00:34:38.138 "name": "Nvme1", 00:34:38.138 "trtype": "tcp", 00:34:38.138 "traddr": "10.0.0.3", 00:34:38.138 "adrfam": "ipv4", 00:34:38.138 "trsvcid": "4420", 00:34:38.138 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:38.138 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:38.138 "hdgst": false, 00:34:38.138 "ddgst": false 00:34:38.138 }, 00:34:38.138 "method": "bdev_nvme_attach_controller" 00:34:38.138 }' 00:34:38.138 [2024-11-20 13:55:35.293398] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:34:38.138 [2024-11-20 13:55:35.293484] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67192 ] 00:34:38.138 [2024-11-20 13:55:35.459181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:38.397 [2024-11-20 13:55:35.544131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:38.397 [2024-11-20 13:55:35.544341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:38.397 [2024-11-20 13:55:35.544343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:38.397 [2024-11-20 13:55:35.632632] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:34:38.657 I/O targets: 00:34:38.657 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:34:38.657 00:34:38.657 00:34:38.657 CUnit - A unit testing framework for C - Version 2.1-3 00:34:38.657 http://cunit.sourceforge.net/ 00:34:38.657 00:34:38.658 00:34:38.658 Suite: bdevio tests on: Nvme1n1 00:34:38.658 Test: blockdev write read block ...passed 00:34:38.658 Test: blockdev write zeroes read block ...passed 00:34:38.658 Test: blockdev write zeroes read no split ...passed 00:34:38.658 Test: blockdev write zeroes read split ...passed 00:34:38.658 Test: blockdev write zeroes read split partial ...passed 00:34:38.658 Test: blockdev reset ...[2024-11-20 13:55:35.796463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:34:38.658 [2024-11-20 13:55:35.796564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2492180 (9): Bad file descriptor 00:34:38.658 [2024-11-20 13:55:35.809184] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:34:38.658 passed 00:34:38.658 Test: blockdev write read 8 blocks ...passed 00:34:38.658 Test: blockdev write read size > 128k ...passed 00:34:38.658 Test: blockdev write read invalid size ...passed 00:34:38.658 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:38.658 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:38.658 Test: blockdev write read max offset ...passed 00:34:38.658 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:38.658 Test: blockdev writev readv 8 blocks ...passed 00:34:38.658 Test: blockdev writev readv 30 x 1block ...passed 00:34:38.658 Test: blockdev writev readv block ...passed 00:34:38.658 Test: blockdev writev readv size > 128k ...passed 00:34:38.658 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:38.658 Test: blockdev comparev and writev ...[2024-11-20 13:55:35.816963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:38.658 [2024-11-20 13:55:35.817004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:38.658 [2024-11-20 13:55:35.817020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:38.658 [2024-11-20 13:55:35.817029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:38.658 [2024-11-20 13:55:35.817357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:38.658 [2024-11-20 13:55:35.817378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:38.658 [2024-11-20 13:55:35.817391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:38.658 [2024-11-20 13:55:35.817399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:38.658 [2024-11-20 13:55:35.817688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:38.658 [2024-11-20 13:55:35.817718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:38.658 [2024-11-20 13:55:35.817731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:38.658 [2024-11-20 13:55:35.817739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:38.658 [2024-11-20 13:55:35.818057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:38.658 [2024-11-20 13:55:35.818079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:38.658 [2024-11-20 13:55:35.818091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:38.658 [2024-11-20 13:55:35.818099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:38.658 passed 00:34:38.658 Test: blockdev nvme passthru rw ...passed 00:34:38.658 Test: blockdev nvme passthru vendor specific ...[2024-11-20 13:55:35.819006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:38.658 [2024-11-20 13:55:35.819038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:38.658 [2024-11-20 13:55:35.819145] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:38.658 [2024-11-20 13:55:35.819155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:38.658 [2024-11-20 13:55:35.819247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:38.658 [2024-11-20 13:55:35.819258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:38.658 [2024-11-20 13:55:35.819350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:38.658 [2024-11-20 13:55:35.819369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:38.658 passed 00:34:38.658 Test: blockdev nvme admin passthru ...passed 00:34:38.658 Test: blockdev copy ...passed 00:34:38.658 00:34:38.658 Run Summary: Type Total Ran Passed Failed Inactive 00:34:38.658 suites 1 1 n/a 0 0 00:34:38.658 tests 23 23 23 0 0 00:34:38.658 asserts 152 152 152 0 n/a 00:34:38.658 00:34:38.658 Elapsed time = 0.157 seconds 00:34:38.918 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:38.918 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.918 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:38.918 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.918 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:34:38.918 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:34:38.918 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:38.918 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:34:38.918 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:38.918 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:34:38.918 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:38.918 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:38.918 rmmod nvme_tcp 00:34:38.918 rmmod nvme_fabrics 00:34:38.918 rmmod nvme_keyring 00:34:38.918 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:38.918 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:34:38.918 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
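nvmftestfini then drops the kernel initiator modules (modprobe -v -r nvme-tcp / nvme-fabrics / nvme-keyring, logged just above) and stops the target process. The killprocess call that follows boils down to a guarded kill-and-wait; a simplified sketch of the behaviour visible in the next lines (what the helper does for a sudo-wrapped process is not shown in this log, so that branch is only stubbed here):

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    kill -0 "$pid" || return 1                      # is the target still alive?
    if [ "$(uname)" = Linux ]; then
        local name
        name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_3
        [ "$name" = sudo ] && return 1              # sudo wrapper: handled differently in the real helper
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                     # reap it before the netns teardown
}

Here the pid is 67156, the nvmf target started for this sub-test, and its process name resolves to reactor_3.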
00:34:38.918 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 67156 ']' 00:34:38.918 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 67156 00:34:38.918 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 67156 ']' 00:34:38.918 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 67156 00:34:38.918 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:34:38.918 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:38.918 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67156 00:34:38.918 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:34:38.918 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:34:38.918 killing process with pid 67156 00:34:38.918 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67156' 00:34:38.918 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 67156 00:34:38.918 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 67156 00:34:39.177 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:39.177 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:39.177 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:39.177 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:34:39.177 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:34:39.177 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:34:39.177 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:39.177 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:39.177 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:34:39.177 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:34:39.177 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:34:39.436 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:34:39.436 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:34:39.436 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:34:39.436 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:34:39.436 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:34:39.437 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:34:39.437 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:34:39.437 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:34:39.437 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:34:39.437 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:39.437 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:39.437 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:34:39.437 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:39.437 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:39.437 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:39.437 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:34:39.437 00:34:39.437 real 0m3.240s 00:34:39.437 user 0m9.670s 00:34:39.437 sys 0m1.019s 00:34:39.437 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:39.437 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:39.437 ************************************ 00:34:39.437 END TEST nvmf_bdevio 00:34:39.437 ************************************ 00:34:39.703 13:55:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:34:39.703 00:34:39.703 real 2m36.920s 00:34:39.703 user 6m55.410s 00:34:39.703 sys 0m48.121s 00:34:39.703 13:55:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:39.703 13:55:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:34:39.703 ************************************ 00:34:39.703 END TEST nvmf_target_core 00:34:39.703 ************************************ 00:34:39.703 13:55:36 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:34:39.703 13:55:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:39.703 13:55:36 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:39.703 13:55:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:39.703 ************************************ 00:34:39.703 START TEST nvmf_target_extra 00:34:39.703 ************************************ 00:34:39.703 13:55:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:34:39.703 * Looking for test storage... 
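Each sub-test begins by probing the installed lcov so the right coverage flags can be exported; the lt 1.15 2 check that follows is a dotted-version comparison, splitting both strings on '.', '-' and ':' and comparing field by field. A condensed illustrative sketch of that logic (not the exact scripts/common.sh implementation; non-numeric fields are simply treated as 0 here):

version_lt() {                          # does $1 sort strictly before $2?
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1                            # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov predates 2.x: keep the --rc lcov_*_coverage=1 options"

With lcov 1.15 the first field comparison (1 < 2) already decides the result, which is why the trace below reaches return 0 and then exports the 1.x-style LCOV_OPTS.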
00:34:39.703 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:34:39.703 13:55:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:39.703 13:55:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:34:39.703 13:55:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:39.703 13:55:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:39.703 13:55:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:39.703 13:55:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:39.703 13:55:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:39.703 13:55:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:34:39.703 13:55:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:34:39.703 13:55:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:34:39.703 13:55:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:34:39.703 13:55:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:34:39.703 13:55:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:34:39.703 13:55:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:34:39.703 13:55:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:39.703 13:55:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:34:39.703 13:55:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:34:39.703 13:55:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:39.703 13:55:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:39.703 13:55:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:34:39.703 13:55:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:34:39.703 13:55:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:39.703 13:55:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:34:39.703 13:55:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:34:39.703 13:55:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:34:39.703 13:55:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:34:39.703 13:55:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:39.703 13:55:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:34:39.703 13:55:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:34:39.703 13:55:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:39.703 13:55:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:39.703 13:55:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:34:39.703 13:55:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:39.703 13:55:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:39.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:39.703 --rc genhtml_branch_coverage=1 00:34:39.703 --rc genhtml_function_coverage=1 00:34:39.703 --rc genhtml_legend=1 00:34:39.703 --rc geninfo_all_blocks=1 00:34:39.703 --rc geninfo_unexecuted_blocks=1 00:34:39.703 00:34:39.703 ' 00:34:39.703 13:55:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:39.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:39.703 --rc genhtml_branch_coverage=1 00:34:39.703 --rc genhtml_function_coverage=1 00:34:39.703 --rc genhtml_legend=1 00:34:39.703 --rc geninfo_all_blocks=1 00:34:39.703 --rc geninfo_unexecuted_blocks=1 00:34:39.703 00:34:39.703 ' 00:34:39.703 13:55:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:39.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:39.703 --rc genhtml_branch_coverage=1 00:34:39.703 --rc genhtml_function_coverage=1 00:34:39.703 --rc genhtml_legend=1 00:34:39.703 --rc geninfo_all_blocks=1 00:34:39.703 --rc geninfo_unexecuted_blocks=1 00:34:39.703 00:34:39.703 ' 00:34:39.703 13:55:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:39.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:39.703 --rc genhtml_branch_coverage=1 00:34:39.703 --rc genhtml_function_coverage=1 00:34:39.703 --rc genhtml_legend=1 00:34:39.703 --rc geninfo_all_blocks=1 00:34:39.703 --rc geninfo_unexecuted_blocks=1 00:34:39.704 00:34:39.704 ' 00:34:39.704 13:55:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:39.704 13:55:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:39.963 13:55:37 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=105ec898-1662-46bd-85be-b241e399edb9 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:39.963 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:34:39.963 ************************************ 00:34:39.963 START TEST nvmf_auth_target 00:34:39.963 ************************************ 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:34:39.963 * Looking for test storage... 
00:34:39.963 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:39.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:39.963 --rc genhtml_branch_coverage=1 00:34:39.963 --rc genhtml_function_coverage=1 00:34:39.963 --rc genhtml_legend=1 00:34:39.963 --rc geninfo_all_blocks=1 00:34:39.963 --rc geninfo_unexecuted_blocks=1 00:34:39.963 00:34:39.963 ' 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:39.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:39.963 --rc genhtml_branch_coverage=1 00:34:39.963 --rc genhtml_function_coverage=1 00:34:39.963 --rc genhtml_legend=1 00:34:39.963 --rc geninfo_all_blocks=1 00:34:39.963 --rc geninfo_unexecuted_blocks=1 00:34:39.963 00:34:39.963 ' 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:39.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:39.963 --rc genhtml_branch_coverage=1 00:34:39.963 --rc genhtml_function_coverage=1 00:34:39.963 --rc genhtml_legend=1 00:34:39.963 --rc geninfo_all_blocks=1 00:34:39.963 --rc geninfo_unexecuted_blocks=1 00:34:39.963 00:34:39.963 ' 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:39.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:39.963 --rc genhtml_branch_coverage=1 00:34:39.963 --rc genhtml_function_coverage=1 00:34:39.963 --rc genhtml_legend=1 00:34:39.963 --rc geninfo_all_blocks=1 00:34:39.963 --rc geninfo_unexecuted_blocks=1 00:34:39.963 00:34:39.963 ' 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=105ec898-1662-46bd-85be-b241e399edb9 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:39.963 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:39.964 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:39.964 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:39.964 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:39.964 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:34:39.964 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:39.964 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:34:39.964 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:39.964 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:39.964 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:39.964 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:39.964 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:39.964 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:39.964 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:39.964 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:39.964 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:39.964 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:39.964 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:34:39.964 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:34:39.964 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:34:39.964 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:34:39.964 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:34:39.964 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:34:39.964 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:34:39.964 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:34:40.223 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:40.223 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:40.223 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:40.223 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:40.223 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:40.223 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:40.223 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:40.223 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:40.223 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:34:40.223 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:34:40.223 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:34:40.223 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:34:40.223 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:34:40.223 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:34:40.223 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:40.223 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:34:40.223 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:34:40.223 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:34:40.223 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:40.223 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:34:40.223 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:40.223 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:34:40.223 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:40.223 
13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:34:40.223 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:40.223 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:40.223 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:40.223 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:40.223 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:40.223 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:40.223 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:34:40.223 Cannot find device "nvmf_init_br" 00:34:40.223 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:34:40.223 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:34:40.223 Cannot find device "nvmf_init_br2" 00:34:40.223 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:34:40.223 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:34:40.223 Cannot find device "nvmf_tgt_br" 00:34:40.223 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:34:40.223 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:34:40.224 Cannot find device "nvmf_tgt_br2" 00:34:40.224 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:34:40.224 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:34:40.224 Cannot find device "nvmf_init_br" 00:34:40.224 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:34:40.224 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:34:40.224 Cannot find device "nvmf_init_br2" 00:34:40.224 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:34:40.224 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:34:40.224 Cannot find device "nvmf_tgt_br" 00:34:40.224 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:34:40.224 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:34:40.224 Cannot find device "nvmf_tgt_br2" 00:34:40.224 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:34:40.224 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:34:40.224 Cannot find device "nvmf_br" 00:34:40.224 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:34:40.224 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:34:40.224 Cannot find device "nvmf_init_if" 00:34:40.224 13:55:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:34:40.224 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:34:40.224 Cannot find device "nvmf_init_if2" 00:34:40.224 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:34:40.224 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:40.224 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:40.224 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:34:40.224 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:40.224 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:40.224 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:34:40.224 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:34:40.224 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:34:40.224 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:34:40.224 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:34:40.224 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:34:40.224 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:40.224 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:34:40.224 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:34:40.224 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:34:40.224 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:34:40.224 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:34:40.224 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:34:40.224 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:34:40.224 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:34:40.224 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:34:40.224 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:34:40.224 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:34:40.224 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:40.483 13:55:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:40.483 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:40.483 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:34:40.483 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:34:40.483 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:34:40.483 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:34:40.483 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:40.483 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:40.483 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:40.483 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:34:40.483 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:34:40.483 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:34:40.483 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:40.483 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:34:40.483 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:34:40.483 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:34:40.483 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:34:40.483 00:34:40.483 --- 10.0.0.3 ping statistics --- 00:34:40.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:40.483 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:34:40.483 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:34:40.483 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:34:40.483 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:34:40.483 00:34:40.483 --- 10.0.0.4 ping statistics --- 00:34:40.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:40.484 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:34:40.484 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:34:40.484 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:40.484 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:34:40.484 00:34:40.484 --- 10.0.0.1 ping statistics --- 00:34:40.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:40.484 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:34:40.484 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:34:40.484 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:40.484 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:34:40.484 00:34:40.484 --- 10.0.0.2 ping statistics --- 00:34:40.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:40.484 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:34:40.484 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:40.484 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:34:40.484 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:40.484 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:40.484 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:40.484 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:40.484 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:40.484 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:40.484 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:40.484 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:34:40.484 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:40.484 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:40.484 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:40.484 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=67476 00:34:40.484 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:34:40.484 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 67476 00:34:40.484 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67476 ']' 00:34:40.484 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:40.484 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:40.484 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
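Before the target app is launched, nvmf_veth_init (traced above, starting at the "Cannot find device" probes) builds the virtual test network: veth pairs nvmf_init_if/nvmf_init_if2 stay on the host side as initiators (10.0.0.1 and 10.0.0.2), nvmf_tgt_if/nvmf_tgt_if2 are moved into the nvmf_tgt_ns_spdk namespace for the target (10.0.0.3 and 10.0.0.4), and all peer ends are enslaved to the nvmf_br bridge with TCP port 4420 opened in iptables. Condensed to one initiator/target pair, the logged commands amount to:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3     # host to target-namespace reachability, as in the stats above

The second pair (nvmf_init_if2 / nvmf_tgt_if2, 10.0.0.2 / 10.0.0.4) is created the same way, which is what the four ping blocks above verify before nvmf_tgt is started inside the namespace.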
00:34:40.484 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:40.484 13:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:41.418 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:41.418 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:34:41.418 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:41.418 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:41.418 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:41.418 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:41.418 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=67508 00:34:41.418 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:34:41.418 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:34:41.418 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:34:41.418 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:34:41.418 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:41.418 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:34:41.418 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:34:41.418 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:34:41.418 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:41.418 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=44f08abbc96572bc6036e12f9d5c8fb566138af8de0a2919 00:34:41.418 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:41.418 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.VA4 00:34:41.418 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 44f08abbc96572bc6036e12f9d5c8fb566138af8de0a2919 0 00:34:41.418 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 44f08abbc96572bc6036e12f9d5c8fb566138af8de0a2919 0 00:34:41.418 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:34:41.418 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:41.418 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=44f08abbc96572bc6036e12f9d5c8fb566138af8de0a2919 00:34:41.418 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:34:41.418 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:34:41.698 13:55:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.VA4 00:34:41.698 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.VA4 00:34:41.698 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.VA4 00:34:41.698 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:34:41.698 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:34:41.698 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:41.698 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:34:41.698 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:34:41.698 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:34:41.698 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:41.698 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=23bf57b665e514573debc19414a949826d9b36f827711ba681b38511c876920e 00:34:41.698 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:34:41.698 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.LZ8 00:34:41.698 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 23bf57b665e514573debc19414a949826d9b36f827711ba681b38511c876920e 3 00:34:41.698 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 23bf57b665e514573debc19414a949826d9b36f827711ba681b38511c876920e 3 00:34:41.698 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:34:41.698 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:41.698 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=23bf57b665e514573debc19414a949826d9b36f827711ba681b38511c876920e 00:34:41.698 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:34:41.698 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:34:41.698 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.LZ8 00:34:41.698 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.LZ8 00:34:41.698 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.LZ8 00:34:41.698 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:34:41.698 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:34:41.698 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:41.698 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:34:41.698 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:34:41.698 13:55:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=baa246924da84aa86e15a192f9dedb5b 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.OQ0 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key baa246924da84aa86e15a192f9dedb5b 1 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 baa246924da84aa86e15a192f9dedb5b 1 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=baa246924da84aa86e15a192f9dedb5b 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.OQ0 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.OQ0 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.OQ0 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d15254a16558b0cbe6dc623f45ede2bb03a6ee581a300a36 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.cZV 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d15254a16558b0cbe6dc623f45ede2bb03a6ee581a300a36 2 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d15254a16558b0cbe6dc623f45ede2bb03a6ee581a300a36 2 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d15254a16558b0cbe6dc623f45ede2bb03a6ee581a300a36 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.cZV 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.cZV 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.cZV 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ffdc9a638d037aeb53b84c1046e8934ccedbf97d6dafa8c7 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Ju5 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ffdc9a638d037aeb53b84c1046e8934ccedbf97d6dafa8c7 2 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ffdc9a638d037aeb53b84c1046e8934ccedbf97d6dafa8c7 2 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ffdc9a638d037aeb53b84c1046e8934ccedbf97d6dafa8c7 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:34:41.699 13:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Ju5 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Ju5 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.Ju5 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:34:41.958 13:55:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9a3e0c9334a72eaeeda07d2ea549469c 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.IF4 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9a3e0c9334a72eaeeda07d2ea549469c 1 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9a3e0c9334a72eaeeda07d2ea549469c 1 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9a3e0c9334a72eaeeda07d2ea549469c 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.IF4 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.IF4 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.IF4 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=7791f0bf078164179210d4e68e5e549c92d12bb679732873510f47c5ab2bed19 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Jfl 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
7791f0bf078164179210d4e68e5e549c92d12bb679732873510f47c5ab2bed19 3 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 7791f0bf078164179210d4e68e5e549c92d12bb679732873510f47c5ab2bed19 3 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=7791f0bf078164179210d4e68e5e549c92d12bb679732873510f47c5ab2bed19 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Jfl 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Jfl 00:34:41.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.Jfl 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 67476 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67476 ']' 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:41.958 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:42.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:34:42.216 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:42.216 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:34:42.216 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 67508 /var/tmp/host.sock 00:34:42.216 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67508 ']' 00:34:42.216 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:34:42.216 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:42.216 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
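The gen_dhchap_key calls above all follow the same recipe: draw half of the requested hex length in random bytes from /dev/urandom, write the key into a mode-0600 temp file, and wrap it in a DHHC-1 container whose second field is the digest index (null=0, sha256=1, sha384=2, sha512=3). A minimal sketch of that recipe, assuming the same helper layout as nvmf/common.sh; the DHHC-1 wrapping itself is performed by the inline python helper shown in the trace and is not reimplemented here:

# gen_dhchap_key <digest> <hex_len>   e.g. "null 48", "sha256 32", "sha512 64"
digest=$1; len=$2
declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)

# xxd -l takes a byte count, so a 48-hex-character key needs 24 random bytes
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)

# this index becomes the "0X" field of the "DHHC-1:0X:...:" secret
idx=${digests[$digest]}

file=$(mktemp -t "spdk.key-$digest.XXX")
# the python one-liner in nvmf/common.sh writes the DHHC-1-formatted secret
# built from $key and $idx into $file (omitted in this sketch)
chmod 0600 "$file"
echo "$file"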
00:34:42.216 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:42.216 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:42.475 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:42.475 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:34:42.475 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:34:42.475 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.475 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:42.475 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.475 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:34:42.475 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.VA4 00:34:42.475 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.475 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:42.475 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.475 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.VA4 00:34:42.475 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.VA4 00:34:42.733 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.LZ8 ]] 00:34:42.733 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.LZ8 00:34:42.733 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.734 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:42.734 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.734 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.LZ8 00:34:42.734 13:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.LZ8 00:34:42.992 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:34:42.992 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.OQ0 00:34:42.992 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.992 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:42.992 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.992 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.OQ0 00:34:42.992 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.OQ0 00:34:43.251 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.cZV ]] 00:34:43.251 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.cZV 00:34:43.251 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.251 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:43.251 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.251 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.cZV 00:34:43.251 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.cZV 00:34:43.511 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:34:43.511 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Ju5 00:34:43.511 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.511 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:43.511 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.511 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Ju5 00:34:43.511 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Ju5 00:34:43.770 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.IF4 ]] 00:34:43.770 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.IF4 00:34:43.770 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.770 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:43.770 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.770 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.IF4 00:34:43.770 13:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.IF4 00:34:43.770 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:34:43.770 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Jfl 00:34:43.770 13:55:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.770 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:43.770 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.770 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Jfl 00:34:43.770 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Jfl 00:34:44.028 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:34:44.028 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:34:44.028 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:34:44.028 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:34:44.028 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:34:44.028 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:34:44.287 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:34:44.287 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:34:44.287 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:34:44.287 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:34:44.287 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:34:44.287 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:34:44.287 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:44.287 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.287 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:44.287 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.287 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:44.287 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:44.287 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:44.546 00:34:44.546 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:34:44.546 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:44.546 13:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:34:44.806 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:44.806 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:44.806 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.806 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:44.806 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.806 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:34:44.806 { 00:34:44.806 "cntlid": 1, 00:34:44.806 "qid": 0, 00:34:44.806 "state": "enabled", 00:34:44.806 "thread": "nvmf_tgt_poll_group_000", 00:34:44.806 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:34:44.806 "listen_address": { 00:34:44.806 "trtype": "TCP", 00:34:44.806 "adrfam": "IPv4", 00:34:44.806 "traddr": "10.0.0.3", 00:34:44.806 "trsvcid": "4420" 00:34:44.806 }, 00:34:44.806 "peer_address": { 00:34:44.806 "trtype": "TCP", 00:34:44.806 "adrfam": "IPv4", 00:34:44.806 "traddr": "10.0.0.1", 00:34:44.806 "trsvcid": "57226" 00:34:44.806 }, 00:34:44.806 "auth": { 00:34:44.806 "state": "completed", 00:34:44.806 "digest": "sha256", 00:34:44.806 "dhgroup": "null" 00:34:44.806 } 00:34:44.806 } 00:34:44.806 ]' 00:34:44.806 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:34:45.065 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:34:45.065 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:34:45.065 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:34:45.065 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:34:45.065 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:45.065 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:45.065 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:45.323 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDRmMDhhYmJjOTY1NzJiYzYwMzZlMTJmOWQ1YzhmYjU2NjEzOGFmOGRlMGEyOTE5gXY1NA==: --dhchap-ctrl-secret DHHC-1:03:MjNiZjU3YjY2NWU1MTQ1NzNkZWJjMTk0MTRhOTQ5ODI2ZDliMzZmODI3NzExYmE2ODFiMzg1MTFjODc2OTIwZb/bPv4=: 00:34:45.323 13:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:00:NDRmMDhhYmJjOTY1NzJiYzYwMzZlMTJmOWQ1YzhmYjU2NjEzOGFmOGRlMGEyOTE5gXY1NA==: --dhchap-ctrl-secret DHHC-1:03:MjNiZjU3YjY2NWU1MTQ1NzNkZWJjMTk0MTRhOTQ5ODI2ZDliMzZmODI3NzExYmE2ODFiMzg1MTFjODc2OTIwZb/bPv4=: 00:34:49.510 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:49.510 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:49.510 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:34:49.510 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.510 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:49.510 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.510 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:34:49.510 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:34:49.510 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:34:49.510 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:34:49.510 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:34:49.510 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:34:49.510 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:34:49.510 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:34:49.510 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:34:49.510 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:49.510 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.510 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:49.510 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.510 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:49.510 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:49.510 13:55:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:49.510 00:34:49.510 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:34:49.510 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:49.510 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:34:49.769 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:49.769 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:49.769 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.769 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:49.769 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.769 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:34:49.769 { 00:34:49.769 "cntlid": 3, 00:34:49.769 "qid": 0, 00:34:49.769 "state": "enabled", 00:34:49.769 "thread": "nvmf_tgt_poll_group_000", 00:34:49.769 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:34:49.769 "listen_address": { 00:34:49.769 "trtype": "TCP", 00:34:49.769 "adrfam": "IPv4", 00:34:49.769 "traddr": "10.0.0.3", 00:34:49.769 "trsvcid": "4420" 00:34:49.769 }, 00:34:49.769 "peer_address": { 00:34:49.769 "trtype": "TCP", 00:34:49.769 "adrfam": "IPv4", 00:34:49.770 "traddr": "10.0.0.1", 00:34:49.770 "trsvcid": "57250" 00:34:49.770 }, 00:34:49.770 "auth": { 00:34:49.770 "state": "completed", 00:34:49.770 "digest": "sha256", 00:34:49.770 "dhgroup": "null" 00:34:49.770 } 00:34:49.770 } 00:34:49.770 ]' 00:34:49.770 13:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:34:49.770 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:34:49.770 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:34:49.770 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:34:49.770 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:34:50.028 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:50.028 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:50.028 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:50.028 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmFhMjQ2OTI0ZGE4NGFhODZlMTVhMTkyZjlkZWRiNWJI/dgb: --dhchap-ctrl-secret 
DHHC-1:02:ZDE1MjU0YTE2NTU4YjBjYmU2ZGM2MjNmNDVlZGUyYmIwM2E2ZWU1ODFhMzAwYTM2myPlpA==: 00:34:50.028 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:01:YmFhMjQ2OTI0ZGE4NGFhODZlMTVhMTkyZjlkZWRiNWJI/dgb: --dhchap-ctrl-secret DHHC-1:02:ZDE1MjU0YTE2NTU4YjBjYmU2ZGM2MjNmNDVlZGUyYmIwM2E2ZWU1ODFhMzAwYTM2myPlpA==: 00:34:50.966 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:50.966 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:50.966 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:34:50.966 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.966 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:50.966 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.966 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:34:50.966 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:34:50.966 13:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:34:50.966 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:34:50.966 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:34:50.966 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:34:50.966 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:34:50.966 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:34:50.966 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:34:50.966 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:50.966 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.966 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:50.966 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.966 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:50.966 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:50.966 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:51.226 00:34:51.226 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:34:51.226 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:51.226 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:34:51.484 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:51.484 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:51.484 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.484 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:51.484 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.484 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:34:51.484 { 00:34:51.484 "cntlid": 5, 00:34:51.484 "qid": 0, 00:34:51.484 "state": "enabled", 00:34:51.484 "thread": "nvmf_tgt_poll_group_000", 00:34:51.484 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:34:51.484 "listen_address": { 00:34:51.484 "trtype": "TCP", 00:34:51.484 "adrfam": "IPv4", 00:34:51.484 "traddr": "10.0.0.3", 00:34:51.484 "trsvcid": "4420" 00:34:51.484 }, 00:34:51.484 "peer_address": { 00:34:51.484 "trtype": "TCP", 00:34:51.484 "adrfam": "IPv4", 00:34:51.484 "traddr": "10.0.0.1", 00:34:51.484 "trsvcid": "57262" 00:34:51.484 }, 00:34:51.484 "auth": { 00:34:51.484 "state": "completed", 00:34:51.484 "digest": "sha256", 00:34:51.485 "dhgroup": "null" 00:34:51.485 } 00:34:51.485 } 00:34:51.485 ]' 00:34:51.485 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:34:51.485 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:34:51.485 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:34:51.743 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:34:51.743 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:34:51.743 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:51.743 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:51.743 13:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:52.001 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ZmZkYzlhNjM4ZDAzN2FlYjUzYjg0YzEwNDZlODkzNGNjZWRiZjk3ZDZkYWZhOGM3YSt+tw==: --dhchap-ctrl-secret DHHC-1:01:OWEzZTBjOTMzNGE3MmVhZWVkYTA3ZDJlYTU0OTQ2OWNbZz1G: 00:34:52.001 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:02:ZmZkYzlhNjM4ZDAzN2FlYjUzYjg0YzEwNDZlODkzNGNjZWRiZjk3ZDZkYWZhOGM3YSt+tw==: --dhchap-ctrl-secret DHHC-1:01:OWEzZTBjOTMzNGE3MmVhZWVkYTA3ZDJlYTU0OTQ2OWNbZz1G: 00:34:52.564 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:52.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:52.564 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:34:52.564 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.564 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:52.564 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.564 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:34:52.564 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:34:52.565 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:34:52.823 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:34:52.823 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:34:52.823 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:34:52.823 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:34:52.823 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:34:52.823 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:34:52.823 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key3 00:34:52.823 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.823 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:52.823 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.823 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:34:52.823 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:34:52.823 13:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:34:53.081 00:34:53.081 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:34:53.081 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:34:53.081 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:53.339 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:53.339 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:53.339 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.339 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:53.339 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.339 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:34:53.339 { 00:34:53.339 "cntlid": 7, 00:34:53.339 "qid": 0, 00:34:53.339 "state": "enabled", 00:34:53.339 "thread": "nvmf_tgt_poll_group_000", 00:34:53.340 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:34:53.340 "listen_address": { 00:34:53.340 "trtype": "TCP", 00:34:53.340 "adrfam": "IPv4", 00:34:53.340 "traddr": "10.0.0.3", 00:34:53.340 "trsvcid": "4420" 00:34:53.340 }, 00:34:53.340 "peer_address": { 00:34:53.340 "trtype": "TCP", 00:34:53.340 "adrfam": "IPv4", 00:34:53.340 "traddr": "10.0.0.1", 00:34:53.340 "trsvcid": "41994" 00:34:53.340 }, 00:34:53.340 "auth": { 00:34:53.340 "state": "completed", 00:34:53.340 "digest": "sha256", 00:34:53.340 "dhgroup": "null" 00:34:53.340 } 00:34:53.340 } 00:34:53.340 ]' 00:34:53.340 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:34:53.340 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:34:53.340 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:34:53.340 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:34:53.340 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:34:53.340 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:53.340 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:53.340 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:53.598 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:Nzc5MWYwYmYwNzgxNjQxNzkyMTBkNGU2OGU1ZTU0OWM5MmQxMmJiNjc5NzMyODczNTEwZjQ3YzVhYjJiZWQxOUnX21U=: 00:34:53.598 13:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:03:Nzc5MWYwYmYwNzgxNjQxNzkyMTBkNGU2OGU1ZTU0OWM5MmQxMmJiNjc5NzMyODczNTEwZjQ3YzVhYjJiZWQxOUnX21U=: 00:34:54.164 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:54.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:54.164 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:34:54.164 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.164 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:54.164 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.164 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:34:54.164 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:34:54.164 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:54.164 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:54.423 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:34:54.423 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:34:54.423 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:34:54.423 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:34:54.423 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:34:54.423 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:34:54.423 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:54.423 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.423 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:54.423 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.423 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:54.423 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:54.423 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:54.681 00:34:54.681 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:34:54.681 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:34:54.681 13:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:54.940 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:54.940 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:54.940 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.940 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:54.940 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.940 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:34:54.940 { 00:34:54.940 "cntlid": 9, 00:34:54.940 "qid": 0, 00:34:54.940 "state": "enabled", 00:34:54.940 "thread": "nvmf_tgt_poll_group_000", 00:34:54.940 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:34:54.940 "listen_address": { 00:34:54.940 "trtype": "TCP", 00:34:54.940 "adrfam": "IPv4", 00:34:54.940 "traddr": "10.0.0.3", 00:34:54.940 "trsvcid": "4420" 00:34:54.940 }, 00:34:54.940 "peer_address": { 00:34:54.940 "trtype": "TCP", 00:34:54.940 "adrfam": "IPv4", 00:34:54.940 "traddr": "10.0.0.1", 00:34:54.940 "trsvcid": "42008" 00:34:54.940 }, 00:34:54.940 "auth": { 00:34:54.940 "state": "completed", 00:34:54.940 "digest": "sha256", 00:34:54.940 "dhgroup": "ffdhe2048" 00:34:54.940 } 00:34:54.940 } 00:34:54.940 ]' 00:34:54.940 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:34:54.940 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:34:54.940 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:34:54.940 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:34:54.940 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:34:55.198 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:55.198 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:55.198 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:55.198 
13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDRmMDhhYmJjOTY1NzJiYzYwMzZlMTJmOWQ1YzhmYjU2NjEzOGFmOGRlMGEyOTE5gXY1NA==: --dhchap-ctrl-secret DHHC-1:03:MjNiZjU3YjY2NWU1MTQ1NzNkZWJjMTk0MTRhOTQ5ODI2ZDliMzZmODI3NzExYmE2ODFiMzg1MTFjODc2OTIwZb/bPv4=: 00:34:55.198 13:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:00:NDRmMDhhYmJjOTY1NzJiYzYwMzZlMTJmOWQ1YzhmYjU2NjEzOGFmOGRlMGEyOTE5gXY1NA==: --dhchap-ctrl-secret DHHC-1:03:MjNiZjU3YjY2NWU1MTQ1NzNkZWJjMTk0MTRhOTQ5ODI2ZDliMzZmODI3NzExYmE2ODFiMzg1MTFjODc2OTIwZb/bPv4=: 00:34:56.133 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:56.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:56.133 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:34:56.133 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.133 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:56.133 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.133 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:34:56.133 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:56.133 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:56.133 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:34:56.133 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:34:56.133 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:34:56.133 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:34:56.133 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:34:56.133 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:34:56.133 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:56.133 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.133 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:56.133 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.133 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:56.133 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:56.133 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:56.392 00:34:56.392 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:34:56.392 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:34:56.392 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:56.652 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:56.652 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:56.652 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.652 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:56.652 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.652 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:34:56.652 { 00:34:56.652 "cntlid": 11, 00:34:56.652 "qid": 0, 00:34:56.652 "state": "enabled", 00:34:56.652 "thread": "nvmf_tgt_poll_group_000", 00:34:56.652 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:34:56.652 "listen_address": { 00:34:56.652 "trtype": "TCP", 00:34:56.652 "adrfam": "IPv4", 00:34:56.652 "traddr": "10.0.0.3", 00:34:56.652 "trsvcid": "4420" 00:34:56.652 }, 00:34:56.652 "peer_address": { 00:34:56.652 "trtype": "TCP", 00:34:56.652 "adrfam": "IPv4", 00:34:56.652 "traddr": "10.0.0.1", 00:34:56.652 "trsvcid": "42032" 00:34:56.652 }, 00:34:56.652 "auth": { 00:34:56.652 "state": "completed", 00:34:56.652 "digest": "sha256", 00:34:56.652 "dhgroup": "ffdhe2048" 00:34:56.652 } 00:34:56.652 } 00:34:56.652 ]' 00:34:56.652 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:34:56.652 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:34:56.652 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:34:56.911 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:34:56.911 13:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:34:56.911 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:56.911 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:56.911 
13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:57.170 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmFhMjQ2OTI0ZGE4NGFhODZlMTVhMTkyZjlkZWRiNWJI/dgb: --dhchap-ctrl-secret DHHC-1:02:ZDE1MjU0YTE2NTU4YjBjYmU2ZGM2MjNmNDVlZGUyYmIwM2E2ZWU1ODFhMzAwYTM2myPlpA==: 00:34:57.170 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:01:YmFhMjQ2OTI0ZGE4NGFhODZlMTVhMTkyZjlkZWRiNWJI/dgb: --dhchap-ctrl-secret DHHC-1:02:ZDE1MjU0YTE2NTU4YjBjYmU2ZGM2MjNmNDVlZGUyYmIwM2E2ZWU1ODFhMzAwYTM2myPlpA==: 00:34:57.739 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:57.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:57.739 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:34:57.739 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.739 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:57.739 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.739 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:34:57.739 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:57.739 13:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:57.998 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:34:57.998 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:34:57.998 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:34:57.999 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:34:57.999 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:34:57.999 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:34:57.999 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:57.999 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.999 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:57.999 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:34:57.999 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:57.999 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:57.999 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:58.259 00:34:58.259 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:34:58.259 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:34:58.259 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:58.517 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.517 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:58.517 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.517 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:58.517 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.517 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:34:58.517 { 00:34:58.517 "cntlid": 13, 00:34:58.517 "qid": 0, 00:34:58.517 "state": "enabled", 00:34:58.517 "thread": "nvmf_tgt_poll_group_000", 00:34:58.517 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:34:58.517 "listen_address": { 00:34:58.517 "trtype": "TCP", 00:34:58.517 "adrfam": "IPv4", 00:34:58.517 "traddr": "10.0.0.3", 00:34:58.517 "trsvcid": "4420" 00:34:58.517 }, 00:34:58.517 "peer_address": { 00:34:58.517 "trtype": "TCP", 00:34:58.517 "adrfam": "IPv4", 00:34:58.517 "traddr": "10.0.0.1", 00:34:58.517 "trsvcid": "42066" 00:34:58.517 }, 00:34:58.517 "auth": { 00:34:58.517 "state": "completed", 00:34:58.517 "digest": "sha256", 00:34:58.517 "dhgroup": "ffdhe2048" 00:34:58.517 } 00:34:58.517 } 00:34:58.517 ]' 00:34:58.517 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:34:58.517 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:34:58.517 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:34:58.517 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:34:58.517 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:34:58.517 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:58.517 13:55:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:58.517 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:58.776 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmZkYzlhNjM4ZDAzN2FlYjUzYjg0YzEwNDZlODkzNGNjZWRiZjk3ZDZkYWZhOGM3YSt+tw==: --dhchap-ctrl-secret DHHC-1:01:OWEzZTBjOTMzNGE3MmVhZWVkYTA3ZDJlYTU0OTQ2OWNbZz1G: 00:34:58.776 13:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:02:ZmZkYzlhNjM4ZDAzN2FlYjUzYjg0YzEwNDZlODkzNGNjZWRiZjk3ZDZkYWZhOGM3YSt+tw==: --dhchap-ctrl-secret DHHC-1:01:OWEzZTBjOTMzNGE3MmVhZWVkYTA3ZDJlYTU0OTQ2OWNbZz1G: 00:34:59.343 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:59.343 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:59.343 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:34:59.343 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.343 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:59.343 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.343 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:34:59.343 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:59.343 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:59.603 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:34:59.603 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:34:59.603 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:34:59.603 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:34:59.603 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:34:59.603 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:34:59.603 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key3 00:34:59.603 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.603 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
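Each connect_authenticate pass in the trace follows the same shape: the host-side bdev_nvme options are pinned to one digest/DH-group pair, the host NQN is allowed on the subsystem with the DH-HMAC-CHAP key under test, a controller is attached over TCP, and the new qpair's auth block is checked before everything is torn down again. A condensed, standalone sketch of one such pass is below; it is reconstructed from the xtrace output, so the default-socket rpc.py call for the target side and the key0/ckey0 keyring names (registered earlier in the run, outside this excerpt) are assumptions rather than the verbatim target/auth.sh helpers.

# Minimal sketch of one connect_authenticate cycle (sha256 / ffdhe2048 / key0),
# reconstructed from the xtrace above. Paths and key names are assumptions.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9

# Host-side initiator RPC server: restrict DH-HMAC-CHAP to one digest/DH group.
"$rpc_py" -s /var/tmp/host.sock bdev_nvme_set_options \
  --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Target side: allow the host with the key pair under test. (rpc_cmd hides its
# socket behind xtrace_disable in the trace; the default target socket is assumed.)
"$rpc_py" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
  --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach an NVMe bdev controller over TCP, authenticating with the
# same keys; this is what bdev_connect expands to at target/auth.sh@60.
"$rpc_py" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
  -a 10.0.0.3 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
  --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Verify the attach and the authentication result reported by the target.
[[ $("$rpc_py" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
qpairs=$("$rpc_py" nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Tear down before the next key index / DH group combination.
"$rpc_py" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0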
00:34:59.603 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.603 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:34:59.603 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:34:59.603 13:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:34:59.862 00:34:59.862 13:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:34:59.862 13:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:59.862 13:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:00.121 13:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:00.121 13:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:00.121 13:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.121 13:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:00.121 13:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.121 13:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:00.121 { 00:35:00.121 "cntlid": 15, 00:35:00.121 "qid": 0, 00:35:00.121 "state": "enabled", 00:35:00.121 "thread": "nvmf_tgt_poll_group_000", 00:35:00.121 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:35:00.121 "listen_address": { 00:35:00.121 "trtype": "TCP", 00:35:00.121 "adrfam": "IPv4", 00:35:00.121 "traddr": "10.0.0.3", 00:35:00.121 "trsvcid": "4420" 00:35:00.121 }, 00:35:00.121 "peer_address": { 00:35:00.121 "trtype": "TCP", 00:35:00.121 "adrfam": "IPv4", 00:35:00.121 "traddr": "10.0.0.1", 00:35:00.121 "trsvcid": "42088" 00:35:00.121 }, 00:35:00.121 "auth": { 00:35:00.121 "state": "completed", 00:35:00.121 "digest": "sha256", 00:35:00.121 "dhgroup": "ffdhe2048" 00:35:00.121 } 00:35:00.121 } 00:35:00.121 ]' 00:35:00.121 13:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:00.121 13:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:35:00.121 13:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:00.121 13:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:35:00.121 13:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:00.121 13:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:00.121 
13:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:00.121 13:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:00.380 13:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzc5MWYwYmYwNzgxNjQxNzkyMTBkNGU2OGU1ZTU0OWM5MmQxMmJiNjc5NzMyODczNTEwZjQ3YzVhYjJiZWQxOUnX21U=: 00:35:00.380 13:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:03:Nzc5MWYwYmYwNzgxNjQxNzkyMTBkNGU2OGU1ZTU0OWM5MmQxMmJiNjc5NzMyODczNTEwZjQ3YzVhYjJiZWQxOUnX21U=: 00:35:00.948 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:00.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:00.948 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:35:00.949 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.949 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:00.949 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.949 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:35:00.949 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:00.949 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:00.949 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:01.207 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:35:01.207 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:01.207 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:35:01.207 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:35:01.207 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:35:01.207 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:01.207 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:01.207 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.207 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:35:01.207 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.207 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:01.207 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:01.207 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:01.465 00:35:01.724 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:01.724 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:01.724 13:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:01.724 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:01.724 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:01.724 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.724 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:01.724 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.724 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:01.724 { 00:35:01.724 "cntlid": 17, 00:35:01.724 "qid": 0, 00:35:01.724 "state": "enabled", 00:35:01.724 "thread": "nvmf_tgt_poll_group_000", 00:35:01.724 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:35:01.724 "listen_address": { 00:35:01.724 "trtype": "TCP", 00:35:01.724 "adrfam": "IPv4", 00:35:01.724 "traddr": "10.0.0.3", 00:35:01.724 "trsvcid": "4420" 00:35:01.724 }, 00:35:01.724 "peer_address": { 00:35:01.724 "trtype": "TCP", 00:35:01.724 "adrfam": "IPv4", 00:35:01.724 "traddr": "10.0.0.1", 00:35:01.724 "trsvcid": "42106" 00:35:01.724 }, 00:35:01.724 "auth": { 00:35:01.724 "state": "completed", 00:35:01.724 "digest": "sha256", 00:35:01.724 "dhgroup": "ffdhe3072" 00:35:01.724 } 00:35:01.724 } 00:35:01.724 ]' 00:35:01.724 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:02.084 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:35:02.084 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:02.084 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:35:02.084 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:02.084 13:55:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:02.084 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:02.084 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:02.084 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDRmMDhhYmJjOTY1NzJiYzYwMzZlMTJmOWQ1YzhmYjU2NjEzOGFmOGRlMGEyOTE5gXY1NA==: --dhchap-ctrl-secret DHHC-1:03:MjNiZjU3YjY2NWU1MTQ1NzNkZWJjMTk0MTRhOTQ5ODI2ZDliMzZmODI3NzExYmE2ODFiMzg1MTFjODc2OTIwZb/bPv4=: 00:35:02.084 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:00:NDRmMDhhYmJjOTY1NzJiYzYwMzZlMTJmOWQ1YzhmYjU2NjEzOGFmOGRlMGEyOTE5gXY1NA==: --dhchap-ctrl-secret DHHC-1:03:MjNiZjU3YjY2NWU1MTQ1NzNkZWJjMTk0MTRhOTQ5ODI2ZDliMzZmODI3NzExYmE2ODFiMzg1MTFjODc2OTIwZb/bPv4=: 00:35:02.650 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:02.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:02.650 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:35:02.650 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.650 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:02.650 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.650 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:02.650 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:02.650 13:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:02.909 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:35:02.909 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:02.909 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:35:02.909 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:35:02.909 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:35:02.909 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:02.909 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:35:02.909 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.909 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:02.909 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.909 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:02.909 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:02.909 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:03.169 00:35:03.429 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:03.429 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:03.429 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:03.429 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:03.429 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:03.429 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.429 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:03.688 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.688 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:03.688 { 00:35:03.688 "cntlid": 19, 00:35:03.688 "qid": 0, 00:35:03.688 "state": "enabled", 00:35:03.688 "thread": "nvmf_tgt_poll_group_000", 00:35:03.688 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:35:03.688 "listen_address": { 00:35:03.688 "trtype": "TCP", 00:35:03.688 "adrfam": "IPv4", 00:35:03.688 "traddr": "10.0.0.3", 00:35:03.688 "trsvcid": "4420" 00:35:03.688 }, 00:35:03.688 "peer_address": { 00:35:03.688 "trtype": "TCP", 00:35:03.688 "adrfam": "IPv4", 00:35:03.688 "traddr": "10.0.0.1", 00:35:03.688 "trsvcid": "41196" 00:35:03.688 }, 00:35:03.688 "auth": { 00:35:03.688 "state": "completed", 00:35:03.688 "digest": "sha256", 00:35:03.688 "dhgroup": "ffdhe3072" 00:35:03.688 } 00:35:03.688 } 00:35:03.688 ]' 00:35:03.688 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:03.688 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:35:03.688 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:03.688 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:35:03.688 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:03.688 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:03.688 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:03.688 13:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:03.948 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmFhMjQ2OTI0ZGE4NGFhODZlMTVhMTkyZjlkZWRiNWJI/dgb: --dhchap-ctrl-secret DHHC-1:02:ZDE1MjU0YTE2NTU4YjBjYmU2ZGM2MjNmNDVlZGUyYmIwM2E2ZWU1ODFhMzAwYTM2myPlpA==: 00:35:03.948 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:01:YmFhMjQ2OTI0ZGE4NGFhODZlMTVhMTkyZjlkZWRiNWJI/dgb: --dhchap-ctrl-secret DHHC-1:02:ZDE1MjU0YTE2NTU4YjBjYmU2ZGM2MjNmNDVlZGUyYmIwM2E2ZWU1ODFhMzAwYTM2myPlpA==: 00:35:04.515 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:04.515 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:04.515 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:35:04.515 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.515 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:04.515 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.515 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:04.515 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:04.515 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:04.774 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:35:04.774 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:04.774 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:35:04.774 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:35:04.774 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:35:04.774 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:04.774 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:04.774 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.774 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:04.774 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.774 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:04.774 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:04.774 13:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:05.032 00:35:05.032 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:05.032 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:05.032 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:05.291 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:05.291 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:05.291 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.291 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:05.291 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.291 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:05.291 { 00:35:05.291 "cntlid": 21, 00:35:05.291 "qid": 0, 00:35:05.291 "state": "enabled", 00:35:05.291 "thread": "nvmf_tgt_poll_group_000", 00:35:05.291 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:35:05.291 "listen_address": { 00:35:05.291 "trtype": "TCP", 00:35:05.291 "adrfam": "IPv4", 00:35:05.291 "traddr": "10.0.0.3", 00:35:05.291 "trsvcid": "4420" 00:35:05.291 }, 00:35:05.291 "peer_address": { 00:35:05.291 "trtype": "TCP", 00:35:05.291 "adrfam": "IPv4", 00:35:05.291 "traddr": "10.0.0.1", 00:35:05.291 "trsvcid": "41226" 00:35:05.291 }, 00:35:05.291 "auth": { 00:35:05.291 "state": "completed", 00:35:05.291 "digest": "sha256", 00:35:05.291 "dhgroup": "ffdhe3072" 00:35:05.291 } 00:35:05.291 } 00:35:05.291 ]' 00:35:05.291 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:05.291 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:35:05.291 13:56:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:05.291 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:35:05.291 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:05.550 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:05.550 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:05.550 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:05.550 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmZkYzlhNjM4ZDAzN2FlYjUzYjg0YzEwNDZlODkzNGNjZWRiZjk3ZDZkYWZhOGM3YSt+tw==: --dhchap-ctrl-secret DHHC-1:01:OWEzZTBjOTMzNGE3MmVhZWVkYTA3ZDJlYTU0OTQ2OWNbZz1G: 00:35:05.550 13:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:02:ZmZkYzlhNjM4ZDAzN2FlYjUzYjg0YzEwNDZlODkzNGNjZWRiZjk3ZDZkYWZhOGM3YSt+tw==: --dhchap-ctrl-secret DHHC-1:01:OWEzZTBjOTMzNGE3MmVhZWVkYTA3ZDJlYTU0OTQ2OWNbZz1G: 00:35:06.145 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:06.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:06.145 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:35:06.145 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.145 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:06.145 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.145 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:06.145 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:06.145 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:06.410 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:35:06.410 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:06.410 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:35:06.410 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:35:06.410 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:35:06.410 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:06.410 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key3 00:35:06.410 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.410 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:06.410 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.410 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:35:06.410 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:35:06.410 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:35:06.676 00:35:06.676 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:06.676 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:06.676 13:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:06.935 13:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:06.935 13:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:06.935 13:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.935 13:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:06.935 13:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.935 13:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:06.935 { 00:35:06.935 "cntlid": 23, 00:35:06.935 "qid": 0, 00:35:06.935 "state": "enabled", 00:35:06.935 "thread": "nvmf_tgt_poll_group_000", 00:35:06.935 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:35:06.935 "listen_address": { 00:35:06.935 "trtype": "TCP", 00:35:06.935 "adrfam": "IPv4", 00:35:06.935 "traddr": "10.0.0.3", 00:35:06.935 "trsvcid": "4420" 00:35:06.935 }, 00:35:06.935 "peer_address": { 00:35:06.935 "trtype": "TCP", 00:35:06.935 "adrfam": "IPv4", 00:35:06.935 "traddr": "10.0.0.1", 00:35:06.935 "trsvcid": "41246" 00:35:06.935 }, 00:35:06.935 "auth": { 00:35:06.935 "state": "completed", 00:35:06.935 "digest": "sha256", 00:35:06.935 "dhgroup": "ffdhe3072" 00:35:06.935 } 00:35:06.935 } 00:35:06.935 ]' 00:35:06.935 13:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:07.195 13:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:35:07.195 13:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:07.195 13:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:35:07.195 13:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:07.195 13:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:07.195 13:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:07.195 13:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:07.454 13:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzc5MWYwYmYwNzgxNjQxNzkyMTBkNGU2OGU1ZTU0OWM5MmQxMmJiNjc5NzMyODczNTEwZjQ3YzVhYjJiZWQxOUnX21U=: 00:35:07.454 13:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:03:Nzc5MWYwYmYwNzgxNjQxNzkyMTBkNGU2OGU1ZTU0OWM5MmQxMmJiNjc5NzMyODczNTEwZjQ3YzVhYjJiZWQxOUnX21U=: 00:35:08.021 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:08.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:08.022 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:35:08.022 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.022 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:08.022 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.022 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:35:08.022 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:08.022 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:08.022 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:08.280 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:35:08.280 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:08.280 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:35:08.280 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:35:08.280 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:35:08.280 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:08.280 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:08.280 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.280 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:08.280 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.280 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:08.280 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:08.280 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:08.539 00:35:08.539 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:08.539 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:08.539 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:08.798 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:08.798 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:08.798 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.798 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:08.798 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.798 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:08.798 { 00:35:08.798 "cntlid": 25, 00:35:08.798 "qid": 0, 00:35:08.798 "state": "enabled", 00:35:08.798 "thread": "nvmf_tgt_poll_group_000", 00:35:08.798 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:35:08.798 "listen_address": { 00:35:08.798 "trtype": "TCP", 00:35:08.798 "adrfam": "IPv4", 00:35:08.798 "traddr": "10.0.0.3", 00:35:08.798 "trsvcid": "4420" 00:35:08.798 }, 00:35:08.798 "peer_address": { 00:35:08.798 "trtype": "TCP", 00:35:08.798 "adrfam": "IPv4", 00:35:08.798 "traddr": "10.0.0.1", 00:35:08.798 "trsvcid": "41274" 00:35:08.798 }, 00:35:08.798 "auth": { 00:35:08.798 "state": "completed", 00:35:08.798 "digest": "sha256", 00:35:08.798 "dhgroup": "ffdhe4096" 00:35:08.798 } 00:35:08.798 } 00:35:08.798 ]' 00:35:08.798 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:35:08.798 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:35:08.798 13:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:08.798 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:35:08.798 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:08.798 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:08.798 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:08.798 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:09.056 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDRmMDhhYmJjOTY1NzJiYzYwMzZlMTJmOWQ1YzhmYjU2NjEzOGFmOGRlMGEyOTE5gXY1NA==: --dhchap-ctrl-secret DHHC-1:03:MjNiZjU3YjY2NWU1MTQ1NzNkZWJjMTk0MTRhOTQ5ODI2ZDliMzZmODI3NzExYmE2ODFiMzg1MTFjODc2OTIwZb/bPv4=: 00:35:09.056 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:00:NDRmMDhhYmJjOTY1NzJiYzYwMzZlMTJmOWQ1YzhmYjU2NjEzOGFmOGRlMGEyOTE5gXY1NA==: --dhchap-ctrl-secret DHHC-1:03:MjNiZjU3YjY2NWU1MTQ1NzNkZWJjMTk0MTRhOTQ5ODI2ZDliMzZmODI3NzExYmE2ODFiMzg1MTFjODc2OTIwZb/bPv4=: 00:35:09.624 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:09.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:09.624 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:35:09.624 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.624 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:09.624 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.624 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:09.624 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:09.624 13:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:09.884 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:35:09.884 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:09.884 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:35:09.884 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:35:09.884 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:35:09.884 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:09.884 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:09.884 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.884 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:09.884 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.884 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:09.884 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:09.884 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:10.144 00:35:10.144 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:10.144 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:10.144 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:10.402 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:10.402 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:10.402 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.402 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:10.402 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.402 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:10.402 { 00:35:10.402 "cntlid": 27, 00:35:10.402 "qid": 0, 00:35:10.402 "state": "enabled", 00:35:10.402 "thread": "nvmf_tgt_poll_group_000", 00:35:10.402 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:35:10.402 "listen_address": { 00:35:10.402 "trtype": "TCP", 00:35:10.402 "adrfam": "IPv4", 00:35:10.402 "traddr": "10.0.0.3", 00:35:10.402 "trsvcid": "4420" 00:35:10.402 }, 00:35:10.402 "peer_address": { 00:35:10.402 "trtype": "TCP", 00:35:10.402 "adrfam": "IPv4", 00:35:10.402 "traddr": "10.0.0.1", 00:35:10.402 "trsvcid": "41306" 00:35:10.402 }, 00:35:10.402 "auth": { 00:35:10.402 "state": "completed", 
00:35:10.402 "digest": "sha256", 00:35:10.402 "dhgroup": "ffdhe4096" 00:35:10.402 } 00:35:10.402 } 00:35:10.402 ]' 00:35:10.402 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:10.403 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:35:10.403 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:10.661 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:35:10.661 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:10.661 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:10.661 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:10.661 13:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:10.920 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmFhMjQ2OTI0ZGE4NGFhODZlMTVhMTkyZjlkZWRiNWJI/dgb: --dhchap-ctrl-secret DHHC-1:02:ZDE1MjU0YTE2NTU4YjBjYmU2ZGM2MjNmNDVlZGUyYmIwM2E2ZWU1ODFhMzAwYTM2myPlpA==: 00:35:10.920 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:01:YmFhMjQ2OTI0ZGE4NGFhODZlMTVhMTkyZjlkZWRiNWJI/dgb: --dhchap-ctrl-secret DHHC-1:02:ZDE1MjU0YTE2NTU4YjBjYmU2ZGM2MjNmNDVlZGUyYmIwM2E2ZWU1ODFhMzAwYTM2myPlpA==: 00:35:11.488 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:11.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:11.488 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:35:11.488 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.488 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:11.488 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.488 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:11.488 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:11.488 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:11.488 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:35:11.488 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:11.488 13:56:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:35:11.488 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:35:11.488 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:35:11.488 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:11.488 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:11.488 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.488 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:11.488 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.489 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:11.489 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:11.489 13:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:12.056 00:35:12.056 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:12.056 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:12.057 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:12.057 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:12.057 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:12.057 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.057 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:12.057 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.057 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:12.057 { 00:35:12.057 "cntlid": 29, 00:35:12.057 "qid": 0, 00:35:12.057 "state": "enabled", 00:35:12.057 "thread": "nvmf_tgt_poll_group_000", 00:35:12.057 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:35:12.057 "listen_address": { 00:35:12.057 "trtype": "TCP", 00:35:12.057 "adrfam": "IPv4", 00:35:12.057 "traddr": "10.0.0.3", 00:35:12.057 "trsvcid": "4420" 00:35:12.057 }, 00:35:12.057 "peer_address": { 00:35:12.057 "trtype": "TCP", 00:35:12.057 "adrfam": 
"IPv4", 00:35:12.057 "traddr": "10.0.0.1", 00:35:12.057 "trsvcid": "41324" 00:35:12.057 }, 00:35:12.057 "auth": { 00:35:12.057 "state": "completed", 00:35:12.057 "digest": "sha256", 00:35:12.057 "dhgroup": "ffdhe4096" 00:35:12.057 } 00:35:12.057 } 00:35:12.057 ]' 00:35:12.057 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:12.314 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:35:12.314 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:12.314 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:35:12.314 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:12.314 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:12.314 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:12.314 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:12.573 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmZkYzlhNjM4ZDAzN2FlYjUzYjg0YzEwNDZlODkzNGNjZWRiZjk3ZDZkYWZhOGM3YSt+tw==: --dhchap-ctrl-secret DHHC-1:01:OWEzZTBjOTMzNGE3MmVhZWVkYTA3ZDJlYTU0OTQ2OWNbZz1G: 00:35:12.573 13:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:02:ZmZkYzlhNjM4ZDAzN2FlYjUzYjg0YzEwNDZlODkzNGNjZWRiZjk3ZDZkYWZhOGM3YSt+tw==: --dhchap-ctrl-secret DHHC-1:01:OWEzZTBjOTMzNGE3MmVhZWVkYTA3ZDJlYTU0OTQ2OWNbZz1G: 00:35:13.144 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:13.144 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:13.144 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:35:13.144 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.144 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:13.144 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.144 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:13.144 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:13.144 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:13.403 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:35:13.403 13:56:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:13.403 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:35:13.403 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:35:13.403 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:35:13.403 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:13.403 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key3 00:35:13.403 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.403 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:13.403 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.403 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:35:13.403 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:35:13.403 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:35:13.661 00:35:13.661 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:13.661 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:13.661 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:13.920 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:13.920 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:13.920 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.920 13:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:13.920 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.920 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:13.920 { 00:35:13.920 "cntlid": 31, 00:35:13.920 "qid": 0, 00:35:13.920 "state": "enabled", 00:35:13.920 "thread": "nvmf_tgt_poll_group_000", 00:35:13.920 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:35:13.920 "listen_address": { 00:35:13.920 "trtype": "TCP", 00:35:13.920 "adrfam": "IPv4", 00:35:13.920 "traddr": "10.0.0.3", 00:35:13.920 "trsvcid": "4420" 00:35:13.920 }, 00:35:13.920 "peer_address": { 00:35:13.920 "trtype": "TCP", 
00:35:13.920 "adrfam": "IPv4", 00:35:13.920 "traddr": "10.0.0.1", 00:35:13.920 "trsvcid": "47896" 00:35:13.920 }, 00:35:13.920 "auth": { 00:35:13.920 "state": "completed", 00:35:13.920 "digest": "sha256", 00:35:13.920 "dhgroup": "ffdhe4096" 00:35:13.920 } 00:35:13.920 } 00:35:13.920 ]' 00:35:13.920 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:13.920 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:35:13.920 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:13.920 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:35:13.920 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:13.920 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:13.920 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:13.920 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:14.180 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzc5MWYwYmYwNzgxNjQxNzkyMTBkNGU2OGU1ZTU0OWM5MmQxMmJiNjc5NzMyODczNTEwZjQ3YzVhYjJiZWQxOUnX21U=: 00:35:14.180 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:03:Nzc5MWYwYmYwNzgxNjQxNzkyMTBkNGU2OGU1ZTU0OWM5MmQxMmJiNjc5NzMyODczNTEwZjQ3YzVhYjJiZWQxOUnX21U=: 00:35:14.750 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:14.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:14.750 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:35:14.750 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.750 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:14.750 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.750 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:35:14.750 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:14.750 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:14.750 13:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:15.008 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:35:15.008 
13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:15.008 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:35:15.008 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:35:15.008 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:35:15.008 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:15.008 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:15.008 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.008 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:15.008 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.008 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:15.008 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:15.008 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:15.268 00:35:15.268 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:15.268 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:15.268 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:15.527 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:15.527 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:15.528 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.528 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:15.528 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.528 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:15.528 { 00:35:15.528 "cntlid": 33, 00:35:15.528 "qid": 0, 00:35:15.528 "state": "enabled", 00:35:15.528 "thread": "nvmf_tgt_poll_group_000", 00:35:15.528 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:35:15.528 "listen_address": { 00:35:15.528 "trtype": "TCP", 00:35:15.528 "adrfam": "IPv4", 00:35:15.528 "traddr": 
"10.0.0.3", 00:35:15.528 "trsvcid": "4420" 00:35:15.528 }, 00:35:15.528 "peer_address": { 00:35:15.528 "trtype": "TCP", 00:35:15.528 "adrfam": "IPv4", 00:35:15.528 "traddr": "10.0.0.1", 00:35:15.528 "trsvcid": "47924" 00:35:15.528 }, 00:35:15.528 "auth": { 00:35:15.528 "state": "completed", 00:35:15.528 "digest": "sha256", 00:35:15.528 "dhgroup": "ffdhe6144" 00:35:15.528 } 00:35:15.528 } 00:35:15.528 ]' 00:35:15.528 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:15.528 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:35:15.528 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:15.528 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:35:15.528 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:15.528 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:15.528 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:15.528 13:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:15.787 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDRmMDhhYmJjOTY1NzJiYzYwMzZlMTJmOWQ1YzhmYjU2NjEzOGFmOGRlMGEyOTE5gXY1NA==: --dhchap-ctrl-secret DHHC-1:03:MjNiZjU3YjY2NWU1MTQ1NzNkZWJjMTk0MTRhOTQ5ODI2ZDliMzZmODI3NzExYmE2ODFiMzg1MTFjODc2OTIwZb/bPv4=: 00:35:15.787 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:00:NDRmMDhhYmJjOTY1NzJiYzYwMzZlMTJmOWQ1YzhmYjU2NjEzOGFmOGRlMGEyOTE5gXY1NA==: --dhchap-ctrl-secret DHHC-1:03:MjNiZjU3YjY2NWU1MTQ1NzNkZWJjMTk0MTRhOTQ5ODI2ZDliMzZmODI3NzExYmE2ODFiMzg1MTFjODc2OTIwZb/bPv4=: 00:35:16.355 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:16.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:16.355 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:35:16.355 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.355 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:16.355 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.356 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:16.356 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:16.356 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:16.615 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:35:16.615 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:16.615 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:35:16.615 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:35:16.615 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:35:16.615 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:16.615 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:16.615 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.615 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:16.615 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.615 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:16.615 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:16.615 13:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:16.874 00:35:16.874 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:16.874 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:16.874 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:17.132 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:17.132 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:17.132 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.132 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:17.132 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.132 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:17.132 { 00:35:17.132 "cntlid": 35, 00:35:17.132 "qid": 0, 00:35:17.132 "state": "enabled", 00:35:17.132 "thread": "nvmf_tgt_poll_group_000", 
00:35:17.132 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:35:17.132 "listen_address": { 00:35:17.132 "trtype": "TCP", 00:35:17.132 "adrfam": "IPv4", 00:35:17.132 "traddr": "10.0.0.3", 00:35:17.132 "trsvcid": "4420" 00:35:17.132 }, 00:35:17.132 "peer_address": { 00:35:17.132 "trtype": "TCP", 00:35:17.132 "adrfam": "IPv4", 00:35:17.132 "traddr": "10.0.0.1", 00:35:17.132 "trsvcid": "47946" 00:35:17.132 }, 00:35:17.132 "auth": { 00:35:17.132 "state": "completed", 00:35:17.132 "digest": "sha256", 00:35:17.132 "dhgroup": "ffdhe6144" 00:35:17.132 } 00:35:17.132 } 00:35:17.132 ]' 00:35:17.132 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:17.132 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:35:17.132 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:17.132 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:35:17.132 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:17.392 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:17.392 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:17.392 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:17.392 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmFhMjQ2OTI0ZGE4NGFhODZlMTVhMTkyZjlkZWRiNWJI/dgb: --dhchap-ctrl-secret DHHC-1:02:ZDE1MjU0YTE2NTU4YjBjYmU2ZGM2MjNmNDVlZGUyYmIwM2E2ZWU1ODFhMzAwYTM2myPlpA==: 00:35:17.392 13:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:01:YmFhMjQ2OTI0ZGE4NGFhODZlMTVhMTkyZjlkZWRiNWJI/dgb: --dhchap-ctrl-secret DHHC-1:02:ZDE1MjU0YTE2NTU4YjBjYmU2ZGM2MjNmNDVlZGUyYmIwM2E2ZWU1ODFhMzAwYTM2myPlpA==: 00:35:17.959 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:17.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:17.959 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:35:17.960 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.960 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:17.960 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.960 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:17.960 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:17.960 13:56:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:18.219 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:35:18.219 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:18.219 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:35:18.219 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:35:18.219 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:35:18.219 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:18.219 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:18.219 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.219 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:18.219 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.219 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:18.219 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:18.219 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:18.787 00:35:18.787 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:18.787 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:18.787 13:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:18.787 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:18.788 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:18.788 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.788 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:18.788 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.788 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:18.788 { 
00:35:18.788 "cntlid": 37, 00:35:18.788 "qid": 0, 00:35:18.788 "state": "enabled", 00:35:18.788 "thread": "nvmf_tgt_poll_group_000", 00:35:18.788 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:35:18.788 "listen_address": { 00:35:18.788 "trtype": "TCP", 00:35:18.788 "adrfam": "IPv4", 00:35:18.788 "traddr": "10.0.0.3", 00:35:18.788 "trsvcid": "4420" 00:35:18.788 }, 00:35:18.788 "peer_address": { 00:35:18.788 "trtype": "TCP", 00:35:18.788 "adrfam": "IPv4", 00:35:18.788 "traddr": "10.0.0.1", 00:35:18.788 "trsvcid": "47960" 00:35:18.788 }, 00:35:18.788 "auth": { 00:35:18.788 "state": "completed", 00:35:18.788 "digest": "sha256", 00:35:18.788 "dhgroup": "ffdhe6144" 00:35:18.788 } 00:35:18.788 } 00:35:18.788 ]' 00:35:18.788 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:18.788 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:35:18.788 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:19.047 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:35:19.047 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:19.047 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:19.047 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:19.047 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:19.047 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmZkYzlhNjM4ZDAzN2FlYjUzYjg0YzEwNDZlODkzNGNjZWRiZjk3ZDZkYWZhOGM3YSt+tw==: --dhchap-ctrl-secret DHHC-1:01:OWEzZTBjOTMzNGE3MmVhZWVkYTA3ZDJlYTU0OTQ2OWNbZz1G: 00:35:19.047 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:02:ZmZkYzlhNjM4ZDAzN2FlYjUzYjg0YzEwNDZlODkzNGNjZWRiZjk3ZDZkYWZhOGM3YSt+tw==: --dhchap-ctrl-secret DHHC-1:01:OWEzZTBjOTMzNGE3MmVhZWVkYTA3ZDJlYTU0OTQ2OWNbZz1G: 00:35:19.615 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:19.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:19.615 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:35:19.615 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.615 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:19.615 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.615 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:19.615 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:19.615 13:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:19.874 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:35:19.874 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:19.874 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:35:19.874 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:35:19.874 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:35:19.874 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:19.874 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key3 00:35:19.874 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.874 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:19.874 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.874 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:35:19.874 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:35:19.874 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:35:20.443 00:35:20.443 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:20.443 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:20.443 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:20.443 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:20.443 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:20.443 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.443 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:20.443 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.443 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:35:20.443 { 00:35:20.443 "cntlid": 39, 00:35:20.443 "qid": 0, 00:35:20.443 "state": "enabled", 00:35:20.443 "thread": "nvmf_tgt_poll_group_000", 00:35:20.443 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:35:20.443 "listen_address": { 00:35:20.443 "trtype": "TCP", 00:35:20.443 "adrfam": "IPv4", 00:35:20.443 "traddr": "10.0.0.3", 00:35:20.443 "trsvcid": "4420" 00:35:20.443 }, 00:35:20.443 "peer_address": { 00:35:20.443 "trtype": "TCP", 00:35:20.443 "adrfam": "IPv4", 00:35:20.443 "traddr": "10.0.0.1", 00:35:20.443 "trsvcid": "47994" 00:35:20.443 }, 00:35:20.443 "auth": { 00:35:20.443 "state": "completed", 00:35:20.443 "digest": "sha256", 00:35:20.443 "dhgroup": "ffdhe6144" 00:35:20.443 } 00:35:20.443 } 00:35:20.443 ]' 00:35:20.443 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:20.443 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:35:20.702 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:20.702 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:35:20.702 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:20.702 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:20.702 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:20.702 13:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:20.961 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzc5MWYwYmYwNzgxNjQxNzkyMTBkNGU2OGU1ZTU0OWM5MmQxMmJiNjc5NzMyODczNTEwZjQ3YzVhYjJiZWQxOUnX21U=: 00:35:20.961 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:03:Nzc5MWYwYmYwNzgxNjQxNzkyMTBkNGU2OGU1ZTU0OWM5MmQxMmJiNjc5NzMyODczNTEwZjQ3YzVhYjJiZWQxOUnX21U=: 00:35:21.530 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:21.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:21.530 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:35:21.530 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.530 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:21.530 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.530 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:35:21.530 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:21.530 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:21.530 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:21.530 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:35:21.530 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:21.530 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:35:21.530 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:35:21.530 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:35:21.530 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:21.530 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:21.530 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.530 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:21.789 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.789 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:21.789 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:21.789 13:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:22.056 00:35:22.323 13:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:22.323 13:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:22.323 13:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:22.323 13:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:22.323 13:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:22.323 13:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.323 13:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:22.323 13:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:35:22.323 13:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:22.323 { 00:35:22.323 "cntlid": 41, 00:35:22.323 "qid": 0, 00:35:22.323 "state": "enabled", 00:35:22.323 "thread": "nvmf_tgt_poll_group_000", 00:35:22.323 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:35:22.323 "listen_address": { 00:35:22.323 "trtype": "TCP", 00:35:22.323 "adrfam": "IPv4", 00:35:22.323 "traddr": "10.0.0.3", 00:35:22.323 "trsvcid": "4420" 00:35:22.323 }, 00:35:22.323 "peer_address": { 00:35:22.323 "trtype": "TCP", 00:35:22.323 "adrfam": "IPv4", 00:35:22.323 "traddr": "10.0.0.1", 00:35:22.323 "trsvcid": "48014" 00:35:22.323 }, 00:35:22.323 "auth": { 00:35:22.323 "state": "completed", 00:35:22.323 "digest": "sha256", 00:35:22.323 "dhgroup": "ffdhe8192" 00:35:22.323 } 00:35:22.323 } 00:35:22.323 ]' 00:35:22.323 13:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:22.582 13:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:35:22.582 13:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:22.582 13:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:35:22.582 13:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:22.582 13:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:22.582 13:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:22.582 13:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:22.840 13:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDRmMDhhYmJjOTY1NzJiYzYwMzZlMTJmOWQ1YzhmYjU2NjEzOGFmOGRlMGEyOTE5gXY1NA==: --dhchap-ctrl-secret DHHC-1:03:MjNiZjU3YjY2NWU1MTQ1NzNkZWJjMTk0MTRhOTQ5ODI2ZDliMzZmODI3NzExYmE2ODFiMzg1MTFjODc2OTIwZb/bPv4=: 00:35:22.840 13:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:00:NDRmMDhhYmJjOTY1NzJiYzYwMzZlMTJmOWQ1YzhmYjU2NjEzOGFmOGRlMGEyOTE5gXY1NA==: --dhchap-ctrl-secret DHHC-1:03:MjNiZjU3YjY2NWU1MTQ1NzNkZWJjMTk0MTRhOTQ5ODI2ZDliMzZmODI3NzExYmE2ODFiMzg1MTFjODc2OTIwZb/bPv4=: 00:35:23.407 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:23.407 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:23.407 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:35:23.407 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.407 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:23.407 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
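For orientation, the per-key cycle the trace repeats above (one connect_authenticate pass per key and DH group) reduces to the following host/target RPC sequence. This is a simplified sketch, not the test script itself: the digest and DH group shown are the ones under test in this excerpt (sha256 / ffdhe8192), "$HOSTNQN" is a stand-in for the host NQN printed in the log, and key0/ckey0 are key names registered earlier in the test, outside this excerpt.

# Host-side bdev_nvme options: allow only the digest/DH group under test
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

# Target side: register the host NQN with its DH-HMAC-CHAP key (and controller key)
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach a controller over TCP, authenticating with the same key pair
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.3 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Verify on the target that the new qpair negotiated the expected parameters
scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[0].auth | .digest, .dhgroup, .state'   # expect sha256 / ffdhe8192 / completed

# Tear down before the next key is exercised
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0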
00:35:23.407 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:23.408 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:23.408 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:23.408 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:35:23.408 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:23.408 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:35:23.408 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:35:23.408 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:35:23.408 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:23.408 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:23.408 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.408 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:23.666 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.666 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:23.666 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:23.666 13:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:23.930 00:35:24.193 13:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:24.193 13:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:24.193 13:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:24.193 13:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:24.193 13:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:24.193 13:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.193 13:56:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:24.193 13:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.193 13:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:24.193 { 00:35:24.193 "cntlid": 43, 00:35:24.193 "qid": 0, 00:35:24.193 "state": "enabled", 00:35:24.193 "thread": "nvmf_tgt_poll_group_000", 00:35:24.193 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:35:24.193 "listen_address": { 00:35:24.193 "trtype": "TCP", 00:35:24.193 "adrfam": "IPv4", 00:35:24.193 "traddr": "10.0.0.3", 00:35:24.193 "trsvcid": "4420" 00:35:24.193 }, 00:35:24.193 "peer_address": { 00:35:24.193 "trtype": "TCP", 00:35:24.193 "adrfam": "IPv4", 00:35:24.193 "traddr": "10.0.0.1", 00:35:24.193 "trsvcid": "44690" 00:35:24.193 }, 00:35:24.193 "auth": { 00:35:24.193 "state": "completed", 00:35:24.193 "digest": "sha256", 00:35:24.193 "dhgroup": "ffdhe8192" 00:35:24.193 } 00:35:24.193 } 00:35:24.193 ]' 00:35:24.193 13:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:24.458 13:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:35:24.458 13:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:24.458 13:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:35:24.458 13:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:24.458 13:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:24.458 13:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:24.458 13:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:24.723 13:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmFhMjQ2OTI0ZGE4NGFhODZlMTVhMTkyZjlkZWRiNWJI/dgb: --dhchap-ctrl-secret DHHC-1:02:ZDE1MjU0YTE2NTU4YjBjYmU2ZGM2MjNmNDVlZGUyYmIwM2E2ZWU1ODFhMzAwYTM2myPlpA==: 00:35:24.723 13:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:01:YmFhMjQ2OTI0ZGE4NGFhODZlMTVhMTkyZjlkZWRiNWJI/dgb: --dhchap-ctrl-secret DHHC-1:02:ZDE1MjU0YTE2NTU4YjBjYmU2ZGM2MjNmNDVlZGUyYmIwM2E2ZWU1ODFhMzAwYTM2myPlpA==: 00:35:25.294 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:25.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:25.294 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:35:25.294 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.294 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
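The @73-@77 checks above parse the target's qpair dump to confirm what was actually negotiated. A minimal stand-alone version of that verification, under the same assumptions as the sketch earlier, would be:

# Ask the target for the subsystem's queue pairs and check the negotiated
# authentication parameters against the expected digest/dhgroup.
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha256" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe8192" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]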
00:35:25.294 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.294 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:25.294 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:25.294 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:25.294 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:35:25.294 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:25.294 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:35:25.294 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:35:25.294 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:35:25.294 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:25.294 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:25.294 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.294 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:25.294 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.294 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:25.294 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:25.294 13:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:25.863 00:35:25.863 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:25.863 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:25.863 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:26.122 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:26.122 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:26.122 13:56:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.122 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:26.122 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.122 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:26.122 { 00:35:26.122 "cntlid": 45, 00:35:26.122 "qid": 0, 00:35:26.122 "state": "enabled", 00:35:26.122 "thread": "nvmf_tgt_poll_group_000", 00:35:26.122 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:35:26.122 "listen_address": { 00:35:26.122 "trtype": "TCP", 00:35:26.122 "adrfam": "IPv4", 00:35:26.122 "traddr": "10.0.0.3", 00:35:26.122 "trsvcid": "4420" 00:35:26.122 }, 00:35:26.122 "peer_address": { 00:35:26.122 "trtype": "TCP", 00:35:26.122 "adrfam": "IPv4", 00:35:26.122 "traddr": "10.0.0.1", 00:35:26.122 "trsvcid": "44714" 00:35:26.122 }, 00:35:26.122 "auth": { 00:35:26.122 "state": "completed", 00:35:26.122 "digest": "sha256", 00:35:26.122 "dhgroup": "ffdhe8192" 00:35:26.122 } 00:35:26.122 } 00:35:26.122 ]' 00:35:26.122 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:26.122 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:35:26.122 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:26.122 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:35:26.122 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:26.381 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:26.381 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:26.381 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:26.381 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmZkYzlhNjM4ZDAzN2FlYjUzYjg0YzEwNDZlODkzNGNjZWRiZjk3ZDZkYWZhOGM3YSt+tw==: --dhchap-ctrl-secret DHHC-1:01:OWEzZTBjOTMzNGE3MmVhZWVkYTA3ZDJlYTU0OTQ2OWNbZz1G: 00:35:26.381 13:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:02:ZmZkYzlhNjM4ZDAzN2FlYjUzYjg0YzEwNDZlODkzNGNjZWRiZjk3ZDZkYWZhOGM3YSt+tw==: --dhchap-ctrl-secret DHHC-1:01:OWEzZTBjOTMzNGE3MmVhZWVkYTA3ZDJlYTU0OTQ2OWNbZz1G: 00:35:26.950 13:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:26.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:26.950 13:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:35:26.950 13:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
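Each pass then tears its state back down before the next combination; a hedged sketch of the cleanup mirrored by the @73/@78/@83 steps above (the nvme-cli reconnect check in between is shown separately further down):

# Verify the host-side controller came up, detach it, then revoke the host
# entry on the target so the next digest/dhgroup/key pass starts clean.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
[[ $("$RPC" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
"$RPC" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9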
00:35:26.950 13:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:26.950 13:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.950 13:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:26.950 13:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:26.950 13:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:27.209 13:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:35:27.209 13:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:27.209 13:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:35:27.209 13:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:35:27.209 13:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:35:27.209 13:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:27.209 13:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key3 00:35:27.209 13:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.209 13:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:27.209 13:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.209 13:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:35:27.209 13:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:35:27.209 13:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:35:27.808 00:35:27.808 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:27.808 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:27.808 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:28.067 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:28.067 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:28.067 
13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.067 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:28.067 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.067 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:28.067 { 00:35:28.067 "cntlid": 47, 00:35:28.067 "qid": 0, 00:35:28.067 "state": "enabled", 00:35:28.067 "thread": "nvmf_tgt_poll_group_000", 00:35:28.067 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:35:28.067 "listen_address": { 00:35:28.067 "trtype": "TCP", 00:35:28.067 "adrfam": "IPv4", 00:35:28.067 "traddr": "10.0.0.3", 00:35:28.067 "trsvcid": "4420" 00:35:28.067 }, 00:35:28.067 "peer_address": { 00:35:28.067 "trtype": "TCP", 00:35:28.067 "adrfam": "IPv4", 00:35:28.067 "traddr": "10.0.0.1", 00:35:28.067 "trsvcid": "44732" 00:35:28.067 }, 00:35:28.067 "auth": { 00:35:28.067 "state": "completed", 00:35:28.067 "digest": "sha256", 00:35:28.067 "dhgroup": "ffdhe8192" 00:35:28.067 } 00:35:28.067 } 00:35:28.067 ]' 00:35:28.067 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:28.067 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:35:28.067 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:28.067 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:35:28.067 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:28.067 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:28.067 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:28.067 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:28.326 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzc5MWYwYmYwNzgxNjQxNzkyMTBkNGU2OGU1ZTU0OWM5MmQxMmJiNjc5NzMyODczNTEwZjQ3YzVhYjJiZWQxOUnX21U=: 00:35:28.326 13:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:03:Nzc5MWYwYmYwNzgxNjQxNzkyMTBkNGU2OGU1ZTU0OWM5MmQxMmJiNjc5NzMyODczNTEwZjQ3YzVhYjJiZWQxOUnX21U=: 00:35:28.895 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:28.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:28.895 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:35:28.895 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.895 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
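From here the trace moves on from sha256/ffdhe8192 to sha384 with the null dhgroup. The @118-@123 markers above imply the driving structure is a digest/dhgroup/key sweep; a hedged reconstruction follows (the actual arrays are defined earlier in target/auth.sh and are only assumed here):

# Hypothetical reconstruction of the sweep implied by the @118-@123 trace lines.
for digest in "${digests[@]}"; do          # digests under test, e.g. sha256, sha384
    for dhgroup in "${dhgroups[@]}"; do    # dhgroups under test, e.g. null, ffdhe2048, ffdhe8192
        for keyid in "${!keys[@]}"; do     # key0..key3 as seen in the trace
            hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done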
00:35:28.895 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.895 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:35:28.895 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:35:28.895 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:28.895 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:35:28.895 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:35:29.154 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:35:29.154 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:29.154 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:35:29.154 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:35:29.154 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:35:29.154 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:29.154 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:29.154 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.154 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:29.154 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.154 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:29.154 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:29.154 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:29.413 00:35:29.413 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:29.413 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:29.413 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:29.671 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:29.671 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:29.671 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.671 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:29.671 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.671 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:29.671 { 00:35:29.671 "cntlid": 49, 00:35:29.671 "qid": 0, 00:35:29.671 "state": "enabled", 00:35:29.671 "thread": "nvmf_tgt_poll_group_000", 00:35:29.671 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:35:29.671 "listen_address": { 00:35:29.671 "trtype": "TCP", 00:35:29.671 "adrfam": "IPv4", 00:35:29.671 "traddr": "10.0.0.3", 00:35:29.671 "trsvcid": "4420" 00:35:29.671 }, 00:35:29.671 "peer_address": { 00:35:29.671 "trtype": "TCP", 00:35:29.671 "adrfam": "IPv4", 00:35:29.671 "traddr": "10.0.0.1", 00:35:29.671 "trsvcid": "44754" 00:35:29.671 }, 00:35:29.671 "auth": { 00:35:29.671 "state": "completed", 00:35:29.671 "digest": "sha384", 00:35:29.671 "dhgroup": "null" 00:35:29.671 } 00:35:29.671 } 00:35:29.671 ]' 00:35:29.671 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:29.671 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:35:29.671 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:29.671 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:35:29.671 13:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:29.930 13:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:29.930 13:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:29.930 13:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:29.930 13:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDRmMDhhYmJjOTY1NzJiYzYwMzZlMTJmOWQ1YzhmYjU2NjEzOGFmOGRlMGEyOTE5gXY1NA==: --dhchap-ctrl-secret DHHC-1:03:MjNiZjU3YjY2NWU1MTQ1NzNkZWJjMTk0MTRhOTQ5ODI2ZDliMzZmODI3NzExYmE2ODFiMzg1MTFjODc2OTIwZb/bPv4=: 00:35:29.930 13:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:00:NDRmMDhhYmJjOTY1NzJiYzYwMzZlMTJmOWQ1YzhmYjU2NjEzOGFmOGRlMGEyOTE5gXY1NA==: --dhchap-ctrl-secret DHHC-1:03:MjNiZjU3YjY2NWU1MTQ1NzNkZWJjMTk0MTRhOTQ5ODI2ZDliMzZmODI3NzExYmE2ODFiMzg1MTFjODc2OTIwZb/bPv4=: 00:35:30.497 13:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:30.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:30.497 13:56:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:35:30.497 13:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.497 13:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:30.497 13:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.497 13:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:30.497 13:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:35:30.497 13:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:35:30.755 13:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:35:30.755 13:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:30.755 13:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:35:30.755 13:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:35:30.755 13:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:35:30.755 13:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:30.755 13:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:30.755 13:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.755 13:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:30.755 13:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.755 13:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:30.755 13:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:30.755 13:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:31.014 00:35:31.014 13:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:31.014 13:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:31.014 13:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:31.273 13:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:31.273 13:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:31.273 13:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.273 13:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:31.273 13:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.273 13:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:31.273 { 00:35:31.273 "cntlid": 51, 00:35:31.273 "qid": 0, 00:35:31.273 "state": "enabled", 00:35:31.273 "thread": "nvmf_tgt_poll_group_000", 00:35:31.273 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:35:31.273 "listen_address": { 00:35:31.273 "trtype": "TCP", 00:35:31.273 "adrfam": "IPv4", 00:35:31.273 "traddr": "10.0.0.3", 00:35:31.273 "trsvcid": "4420" 00:35:31.273 }, 00:35:31.273 "peer_address": { 00:35:31.273 "trtype": "TCP", 00:35:31.273 "adrfam": "IPv4", 00:35:31.273 "traddr": "10.0.0.1", 00:35:31.273 "trsvcid": "44784" 00:35:31.273 }, 00:35:31.273 "auth": { 00:35:31.273 "state": "completed", 00:35:31.273 "digest": "sha384", 00:35:31.273 "dhgroup": "null" 00:35:31.273 } 00:35:31.273 } 00:35:31.273 ]' 00:35:31.273 13:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:31.273 13:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:35:31.273 13:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:31.273 13:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:35:31.273 13:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:31.273 13:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:31.273 13:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:31.273 13:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:31.531 13:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmFhMjQ2OTI0ZGE4NGFhODZlMTVhMTkyZjlkZWRiNWJI/dgb: --dhchap-ctrl-secret DHHC-1:02:ZDE1MjU0YTE2NTU4YjBjYmU2ZGM2MjNmNDVlZGUyYmIwM2E2ZWU1ODFhMzAwYTM2myPlpA==: 00:35:31.531 13:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:01:YmFhMjQ2OTI0ZGE4NGFhODZlMTVhMTkyZjlkZWRiNWJI/dgb: --dhchap-ctrl-secret DHHC-1:02:ZDE1MjU0YTE2NTU4YjBjYmU2ZGM2MjNmNDVlZGUyYmIwM2E2ZWU1ODFhMzAwYTM2myPlpA==: 00:35:32.098 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:32.098 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:32.098 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:35:32.098 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.098 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:32.098 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.098 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:32.098 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:35:32.098 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:35:32.358 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:35:32.358 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:32.358 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:35:32.358 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:35:32.358 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:35:32.358 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:32.358 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:32.358 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.358 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:32.358 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.358 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:32.358 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:32.358 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:32.617 00:35:32.617 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:32.617 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:35:32.617 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:32.877 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:32.877 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:32.877 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.877 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:32.877 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.877 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:32.877 { 00:35:32.877 "cntlid": 53, 00:35:32.877 "qid": 0, 00:35:32.877 "state": "enabled", 00:35:32.877 "thread": "nvmf_tgt_poll_group_000", 00:35:32.877 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:35:32.877 "listen_address": { 00:35:32.877 "trtype": "TCP", 00:35:32.877 "adrfam": "IPv4", 00:35:32.877 "traddr": "10.0.0.3", 00:35:32.877 "trsvcid": "4420" 00:35:32.877 }, 00:35:32.877 "peer_address": { 00:35:32.877 "trtype": "TCP", 00:35:32.877 "adrfam": "IPv4", 00:35:32.877 "traddr": "10.0.0.1", 00:35:32.877 "trsvcid": "37446" 00:35:32.877 }, 00:35:32.877 "auth": { 00:35:32.877 "state": "completed", 00:35:32.877 "digest": "sha384", 00:35:32.877 "dhgroup": "null" 00:35:32.877 } 00:35:32.877 } 00:35:32.877 ]' 00:35:32.877 13:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:32.877 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:35:32.877 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:32.877 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:35:32.877 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:32.877 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:32.877 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:32.877 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:33.138 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmZkYzlhNjM4ZDAzN2FlYjUzYjg0YzEwNDZlODkzNGNjZWRiZjk3ZDZkYWZhOGM3YSt+tw==: --dhchap-ctrl-secret DHHC-1:01:OWEzZTBjOTMzNGE3MmVhZWVkYTA3ZDJlYTU0OTQ2OWNbZz1G: 00:35:33.138 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:02:ZmZkYzlhNjM4ZDAzN2FlYjUzYjg0YzEwNDZlODkzNGNjZWRiZjk3ZDZkYWZhOGM3YSt+tw==: --dhchap-ctrl-secret DHHC-1:01:OWEzZTBjOTMzNGE3MmVhZWVkYTA3ZDJlYTU0OTQ2OWNbZz1G: 00:35:33.710 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:33.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:33.710 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:35:33.710 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.710 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:33.710 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.710 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:33.710 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:35:33.710 13:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:35:33.969 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:35:33.969 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:33.969 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:35:33.969 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:35:33.969 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:35:33.969 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:33.969 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key3 00:35:33.969 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.969 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:33.969 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.969 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:35:33.969 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:35:33.969 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:35:34.228 00:35:34.228 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:34.228 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:34.228 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:34.488 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:34.488 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:34.488 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.488 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:34.488 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.488 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:34.488 { 00:35:34.488 "cntlid": 55, 00:35:34.488 "qid": 0, 00:35:34.488 "state": "enabled", 00:35:34.488 "thread": "nvmf_tgt_poll_group_000", 00:35:34.488 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:35:34.488 "listen_address": { 00:35:34.488 "trtype": "TCP", 00:35:34.488 "adrfam": "IPv4", 00:35:34.488 "traddr": "10.0.0.3", 00:35:34.488 "trsvcid": "4420" 00:35:34.488 }, 00:35:34.488 "peer_address": { 00:35:34.488 "trtype": "TCP", 00:35:34.488 "adrfam": "IPv4", 00:35:34.488 "traddr": "10.0.0.1", 00:35:34.488 "trsvcid": "37468" 00:35:34.488 }, 00:35:34.488 "auth": { 00:35:34.488 "state": "completed", 00:35:34.488 "digest": "sha384", 00:35:34.488 "dhgroup": "null" 00:35:34.488 } 00:35:34.488 } 00:35:34.488 ]' 00:35:34.488 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:34.488 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:35:34.488 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:34.488 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:35:34.488 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:34.488 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:34.488 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:34.488 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:34.748 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzc5MWYwYmYwNzgxNjQxNzkyMTBkNGU2OGU1ZTU0OWM5MmQxMmJiNjc5NzMyODczNTEwZjQ3YzVhYjJiZWQxOUnX21U=: 00:35:34.748 13:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:03:Nzc5MWYwYmYwNzgxNjQxNzkyMTBkNGU2OGU1ZTU0OWM5MmQxMmJiNjc5NzMyODczNTEwZjQ3YzVhYjJiZWQxOUnX21U=: 00:35:35.318 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:35.318 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:35:35.318 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:35:35.318 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.318 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:35.318 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.318 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:35:35.318 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:35.318 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:35.318 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:35.577 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:35:35.577 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:35.577 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:35:35.577 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:35:35.577 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:35:35.577 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:35.577 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:35.577 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.577 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:35.577 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.577 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:35.577 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:35.577 13:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:35.837 00:35:35.837 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
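The nvme_connect/nvme disconnect wrappers seen throughout the trace exercise the kernel initiator path with the same secrets. A stand-alone equivalent of those nvme-cli calls, with the DHHC-1 secrets replaced by placeholders since the real ones are generated earlier in the run:

# Kernel-host leg of the check: connect with in-band DH-HMAC-CHAP, then disconnect.
# <host-secret>/<ctrl-secret> stand in for the DHHC-1:xx:...: strings printed above.
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 \
    --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 \
    --dhchap-secret '<host-secret>' --dhchap-ctrl-secret '<ctrl-secret>'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0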
00:35:35.837 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:35.837 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:36.097 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:36.097 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:36.097 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.097 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:36.097 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.097 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:36.097 { 00:35:36.097 "cntlid": 57, 00:35:36.097 "qid": 0, 00:35:36.097 "state": "enabled", 00:35:36.097 "thread": "nvmf_tgt_poll_group_000", 00:35:36.097 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:35:36.097 "listen_address": { 00:35:36.097 "trtype": "TCP", 00:35:36.097 "adrfam": "IPv4", 00:35:36.097 "traddr": "10.0.0.3", 00:35:36.097 "trsvcid": "4420" 00:35:36.097 }, 00:35:36.097 "peer_address": { 00:35:36.097 "trtype": "TCP", 00:35:36.097 "adrfam": "IPv4", 00:35:36.097 "traddr": "10.0.0.1", 00:35:36.097 "trsvcid": "37500" 00:35:36.097 }, 00:35:36.097 "auth": { 00:35:36.097 "state": "completed", 00:35:36.097 "digest": "sha384", 00:35:36.097 "dhgroup": "ffdhe2048" 00:35:36.097 } 00:35:36.097 } 00:35:36.097 ]' 00:35:36.097 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:36.097 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:35:36.097 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:36.097 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:35:36.097 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:36.097 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:36.097 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:36.097 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:36.357 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDRmMDhhYmJjOTY1NzJiYzYwMzZlMTJmOWQ1YzhmYjU2NjEzOGFmOGRlMGEyOTE5gXY1NA==: --dhchap-ctrl-secret DHHC-1:03:MjNiZjU3YjY2NWU1MTQ1NzNkZWJjMTk0MTRhOTQ5ODI2ZDliMzZmODI3NzExYmE2ODFiMzg1MTFjODc2OTIwZb/bPv4=: 00:35:36.357 13:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:00:NDRmMDhhYmJjOTY1NzJiYzYwMzZlMTJmOWQ1YzhmYjU2NjEzOGFmOGRlMGEyOTE5gXY1NA==: 
--dhchap-ctrl-secret DHHC-1:03:MjNiZjU3YjY2NWU1MTQ1NzNkZWJjMTk0MTRhOTQ5ODI2ZDliMzZmODI3NzExYmE2ODFiMzg1MTFjODc2OTIwZb/bPv4=: 00:35:36.926 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:36.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:36.926 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:35:36.926 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.926 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:36.926 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.926 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:36.926 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:36.926 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:37.185 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:35:37.186 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:37.186 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:35:37.186 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:35:37.186 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:35:37.186 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:37.186 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:37.186 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.186 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:37.186 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.186 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:37.186 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:37.186 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:37.446 00:35:37.446 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:37.446 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:37.446 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:37.706 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:37.706 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:37.706 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.706 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:37.706 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.706 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:37.706 { 00:35:37.706 "cntlid": 59, 00:35:37.706 "qid": 0, 00:35:37.706 "state": "enabled", 00:35:37.706 "thread": "nvmf_tgt_poll_group_000", 00:35:37.706 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:35:37.706 "listen_address": { 00:35:37.706 "trtype": "TCP", 00:35:37.706 "adrfam": "IPv4", 00:35:37.706 "traddr": "10.0.0.3", 00:35:37.706 "trsvcid": "4420" 00:35:37.706 }, 00:35:37.706 "peer_address": { 00:35:37.706 "trtype": "TCP", 00:35:37.706 "adrfam": "IPv4", 00:35:37.706 "traddr": "10.0.0.1", 00:35:37.706 "trsvcid": "37524" 00:35:37.706 }, 00:35:37.706 "auth": { 00:35:37.706 "state": "completed", 00:35:37.706 "digest": "sha384", 00:35:37.706 "dhgroup": "ffdhe2048" 00:35:37.706 } 00:35:37.706 } 00:35:37.706 ]' 00:35:37.706 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:37.706 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:35:37.706 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:37.706 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:35:37.706 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:37.706 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:37.706 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:37.706 13:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:37.966 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmFhMjQ2OTI0ZGE4NGFhODZlMTVhMTkyZjlkZWRiNWJI/dgb: --dhchap-ctrl-secret DHHC-1:02:ZDE1MjU0YTE2NTU4YjBjYmU2ZGM2MjNmNDVlZGUyYmIwM2E2ZWU1ODFhMzAwYTM2myPlpA==: 00:35:37.966 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:01:YmFhMjQ2OTI0ZGE4NGFhODZlMTVhMTkyZjlkZWRiNWJI/dgb: --dhchap-ctrl-secret DHHC-1:02:ZDE1MjU0YTE2NTU4YjBjYmU2ZGM2MjNmNDVlZGUyYmIwM2E2ZWU1ODFhMzAwYTM2myPlpA==: 00:35:38.535 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:38.535 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:38.535 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:35:38.535 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.535 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:38.535 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.535 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:38.535 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:38.535 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:38.794 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:35:38.795 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:38.795 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:35:38.795 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:35:38.795 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:35:38.795 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:38.795 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:38.795 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.795 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:38.795 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.795 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:38.795 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:38.795 13:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:39.054 00:35:39.054 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:39.054 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:39.054 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:39.314 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:39.314 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:39.314 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.314 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:39.314 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.314 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:39.314 { 00:35:39.314 "cntlid": 61, 00:35:39.314 "qid": 0, 00:35:39.314 "state": "enabled", 00:35:39.314 "thread": "nvmf_tgt_poll_group_000", 00:35:39.314 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:35:39.314 "listen_address": { 00:35:39.314 "trtype": "TCP", 00:35:39.314 "adrfam": "IPv4", 00:35:39.314 "traddr": "10.0.0.3", 00:35:39.314 "trsvcid": "4420" 00:35:39.314 }, 00:35:39.314 "peer_address": { 00:35:39.314 "trtype": "TCP", 00:35:39.314 "adrfam": "IPv4", 00:35:39.314 "traddr": "10.0.0.1", 00:35:39.314 "trsvcid": "37550" 00:35:39.314 }, 00:35:39.314 "auth": { 00:35:39.314 "state": "completed", 00:35:39.314 "digest": "sha384", 00:35:39.314 "dhgroup": "ffdhe2048" 00:35:39.314 } 00:35:39.314 } 00:35:39.314 ]' 00:35:39.314 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:39.314 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:35:39.314 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:39.314 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:35:39.314 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:39.314 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:39.314 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:39.314 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:39.574 13:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmZkYzlhNjM4ZDAzN2FlYjUzYjg0YzEwNDZlODkzNGNjZWRiZjk3ZDZkYWZhOGM3YSt+tw==: --dhchap-ctrl-secret DHHC-1:01:OWEzZTBjOTMzNGE3MmVhZWVkYTA3ZDJlYTU0OTQ2OWNbZz1G: 00:35:39.574 13:56:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:02:ZmZkYzlhNjM4ZDAzN2FlYjUzYjg0YzEwNDZlODkzNGNjZWRiZjk3ZDZkYWZhOGM3YSt+tw==: --dhchap-ctrl-secret DHHC-1:01:OWEzZTBjOTMzNGE3MmVhZWVkYTA3ZDJlYTU0OTQ2OWNbZz1G: 00:35:40.142 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:40.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:40.142 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:35:40.142 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.142 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:40.142 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.142 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:40.142 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:40.142 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:40.401 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:35:40.401 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:40.401 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:35:40.401 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:35:40.401 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:35:40.401 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:40.401 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key3 00:35:40.401 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.401 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:40.401 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.401 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:35:40.401 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:35:40.401 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:35:40.661 00:35:40.661 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:40.661 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:40.661 13:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:40.921 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:40.921 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:40.921 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.921 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:40.921 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.921 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:40.921 { 00:35:40.921 "cntlid": 63, 00:35:40.921 "qid": 0, 00:35:40.921 "state": "enabled", 00:35:40.921 "thread": "nvmf_tgt_poll_group_000", 00:35:40.921 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:35:40.921 "listen_address": { 00:35:40.921 "trtype": "TCP", 00:35:40.921 "adrfam": "IPv4", 00:35:40.921 "traddr": "10.0.0.3", 00:35:40.921 "trsvcid": "4420" 00:35:40.921 }, 00:35:40.921 "peer_address": { 00:35:40.921 "trtype": "TCP", 00:35:40.921 "adrfam": "IPv4", 00:35:40.921 "traddr": "10.0.0.1", 00:35:40.921 "trsvcid": "37574" 00:35:40.921 }, 00:35:40.921 "auth": { 00:35:40.921 "state": "completed", 00:35:40.921 "digest": "sha384", 00:35:40.921 "dhgroup": "ffdhe2048" 00:35:40.921 } 00:35:40.921 } 00:35:40.921 ]' 00:35:40.921 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:40.921 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:35:40.921 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:40.921 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:35:40.921 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:40.921 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:40.921 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:40.921 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:41.180 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzc5MWYwYmYwNzgxNjQxNzkyMTBkNGU2OGU1ZTU0OWM5MmQxMmJiNjc5NzMyODczNTEwZjQ3YzVhYjJiZWQxOUnX21U=: 00:35:41.181 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:03:Nzc5MWYwYmYwNzgxNjQxNzkyMTBkNGU2OGU1ZTU0OWM5MmQxMmJiNjc5NzMyODczNTEwZjQ3YzVhYjJiZWQxOUnX21U=: 00:35:41.749 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:41.749 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:41.749 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:35:41.749 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.749 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:41.749 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.749 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:35:41.749 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:41.749 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:41.749 13:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:42.009 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:35:42.009 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:42.009 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:35:42.009 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:35:42.009 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:35:42.009 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:42.009 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:42.009 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.009 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:42.009 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.009 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:42.009 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:35:42.009 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:42.269 00:35:42.269 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:42.269 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:42.269 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:42.529 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:42.529 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:42.529 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.529 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:42.529 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.529 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:42.529 { 00:35:42.529 "cntlid": 65, 00:35:42.529 "qid": 0, 00:35:42.529 "state": "enabled", 00:35:42.529 "thread": "nvmf_tgt_poll_group_000", 00:35:42.529 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:35:42.529 "listen_address": { 00:35:42.529 "trtype": "TCP", 00:35:42.529 "adrfam": "IPv4", 00:35:42.529 "traddr": "10.0.0.3", 00:35:42.529 "trsvcid": "4420" 00:35:42.529 }, 00:35:42.529 "peer_address": { 00:35:42.529 "trtype": "TCP", 00:35:42.529 "adrfam": "IPv4", 00:35:42.529 "traddr": "10.0.0.1", 00:35:42.529 "trsvcid": "37606" 00:35:42.529 }, 00:35:42.529 "auth": { 00:35:42.529 "state": "completed", 00:35:42.529 "digest": "sha384", 00:35:42.529 "dhgroup": "ffdhe3072" 00:35:42.529 } 00:35:42.529 } 00:35:42.529 ]' 00:35:42.529 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:42.529 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:35:42.529 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:42.529 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:35:42.529 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:42.789 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:42.789 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:42.789 13:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:42.789 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NDRmMDhhYmJjOTY1NzJiYzYwMzZlMTJmOWQ1YzhmYjU2NjEzOGFmOGRlMGEyOTE5gXY1NA==: --dhchap-ctrl-secret DHHC-1:03:MjNiZjU3YjY2NWU1MTQ1NzNkZWJjMTk0MTRhOTQ5ODI2ZDliMzZmODI3NzExYmE2ODFiMzg1MTFjODc2OTIwZb/bPv4=: 00:35:42.789 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:00:NDRmMDhhYmJjOTY1NzJiYzYwMzZlMTJmOWQ1YzhmYjU2NjEzOGFmOGRlMGEyOTE5gXY1NA==: --dhchap-ctrl-secret DHHC-1:03:MjNiZjU3YjY2NWU1MTQ1NzNkZWJjMTk0MTRhOTQ5ODI2ZDliMzZmODI3NzExYmE2ODFiMzg1MTFjODc2OTIwZb/bPv4=: 00:35:43.359 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:43.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:43.359 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:35:43.359 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.359 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:43.359 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.359 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:43.359 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:43.359 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:43.619 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:35:43.619 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:43.619 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:35:43.619 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:35:43.619 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:35:43.619 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:43.619 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:43.619 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.619 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:43.619 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.619 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:43.619 13:56:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:43.619 13:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:43.890 00:35:43.890 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:43.890 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:43.890 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:44.167 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:44.167 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:44.167 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.167 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:44.167 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.167 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:44.167 { 00:35:44.167 "cntlid": 67, 00:35:44.167 "qid": 0, 00:35:44.167 "state": "enabled", 00:35:44.167 "thread": "nvmf_tgt_poll_group_000", 00:35:44.167 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:35:44.167 "listen_address": { 00:35:44.167 "trtype": "TCP", 00:35:44.167 "adrfam": "IPv4", 00:35:44.167 "traddr": "10.0.0.3", 00:35:44.167 "trsvcid": "4420" 00:35:44.167 }, 00:35:44.167 "peer_address": { 00:35:44.167 "trtype": "TCP", 00:35:44.167 "adrfam": "IPv4", 00:35:44.167 "traddr": "10.0.0.1", 00:35:44.167 "trsvcid": "43650" 00:35:44.167 }, 00:35:44.167 "auth": { 00:35:44.167 "state": "completed", 00:35:44.167 "digest": "sha384", 00:35:44.167 "dhgroup": "ffdhe3072" 00:35:44.167 } 00:35:44.167 } 00:35:44.167 ]' 00:35:44.167 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:44.167 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:35:44.167 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:44.167 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:35:44.167 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:44.427 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:44.427 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:44.427 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:44.427 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmFhMjQ2OTI0ZGE4NGFhODZlMTVhMTkyZjlkZWRiNWJI/dgb: --dhchap-ctrl-secret DHHC-1:02:ZDE1MjU0YTE2NTU4YjBjYmU2ZGM2MjNmNDVlZGUyYmIwM2E2ZWU1ODFhMzAwYTM2myPlpA==: 00:35:44.427 13:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:01:YmFhMjQ2OTI0ZGE4NGFhODZlMTVhMTkyZjlkZWRiNWJI/dgb: --dhchap-ctrl-secret DHHC-1:02:ZDE1MjU0YTE2NTU4YjBjYmU2ZGM2MjNmNDVlZGUyYmIwM2E2ZWU1ODFhMzAwYTM2myPlpA==: 00:35:45.005 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:45.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:45.006 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:35:45.006 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.006 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:45.006 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.006 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:45.006 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:45.006 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:45.275 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:35:45.275 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:45.275 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:35:45.275 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:35:45.275 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:35:45.275 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:45.275 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:45.275 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.275 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:45.275 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.275 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:45.275 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:45.275 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:45.535 00:35:45.535 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:45.535 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:45.535 13:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:45.795 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:45.795 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:45.795 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.795 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:45.795 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.795 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:45.795 { 00:35:45.795 "cntlid": 69, 00:35:45.795 "qid": 0, 00:35:45.795 "state": "enabled", 00:35:45.795 "thread": "nvmf_tgt_poll_group_000", 00:35:45.795 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:35:45.795 "listen_address": { 00:35:45.795 "trtype": "TCP", 00:35:45.795 "adrfam": "IPv4", 00:35:45.795 "traddr": "10.0.0.3", 00:35:45.795 "trsvcid": "4420" 00:35:45.795 }, 00:35:45.795 "peer_address": { 00:35:45.795 "trtype": "TCP", 00:35:45.795 "adrfam": "IPv4", 00:35:45.795 "traddr": "10.0.0.1", 00:35:45.795 "trsvcid": "43682" 00:35:45.795 }, 00:35:45.795 "auth": { 00:35:45.795 "state": "completed", 00:35:45.795 "digest": "sha384", 00:35:45.795 "dhgroup": "ffdhe3072" 00:35:45.795 } 00:35:45.795 } 00:35:45.795 ]' 00:35:45.795 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:45.795 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:35:45.795 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:46.055 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:35:46.055 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:46.055 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:46.055 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:35:46.055 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:46.055 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmZkYzlhNjM4ZDAzN2FlYjUzYjg0YzEwNDZlODkzNGNjZWRiZjk3ZDZkYWZhOGM3YSt+tw==: --dhchap-ctrl-secret DHHC-1:01:OWEzZTBjOTMzNGE3MmVhZWVkYTA3ZDJlYTU0OTQ2OWNbZz1G: 00:35:46.055 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:02:ZmZkYzlhNjM4ZDAzN2FlYjUzYjg0YzEwNDZlODkzNGNjZWRiZjk3ZDZkYWZhOGM3YSt+tw==: --dhchap-ctrl-secret DHHC-1:01:OWEzZTBjOTMzNGE3MmVhZWVkYTA3ZDJlYTU0OTQ2OWNbZz1G: 00:35:46.624 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:46.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:46.624 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:35:46.624 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.624 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:46.624 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.624 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:46.624 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:46.624 13:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:46.884 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:35:46.884 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:46.884 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:35:46.884 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:35:46.884 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:35:46.884 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:46.884 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key3 00:35:46.884 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.884 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:46.884 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.884 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:35:46.884 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:35:46.884 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:35:47.143 00:35:47.143 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:47.143 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:47.143 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:47.403 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:47.403 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:47.403 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.403 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:47.403 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.403 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:47.403 { 00:35:47.403 "cntlid": 71, 00:35:47.403 "qid": 0, 00:35:47.403 "state": "enabled", 00:35:47.403 "thread": "nvmf_tgt_poll_group_000", 00:35:47.403 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:35:47.403 "listen_address": { 00:35:47.403 "trtype": "TCP", 00:35:47.403 "adrfam": "IPv4", 00:35:47.403 "traddr": "10.0.0.3", 00:35:47.403 "trsvcid": "4420" 00:35:47.403 }, 00:35:47.403 "peer_address": { 00:35:47.403 "trtype": "TCP", 00:35:47.403 "adrfam": "IPv4", 00:35:47.403 "traddr": "10.0.0.1", 00:35:47.403 "trsvcid": "43708" 00:35:47.403 }, 00:35:47.403 "auth": { 00:35:47.403 "state": "completed", 00:35:47.403 "digest": "sha384", 00:35:47.403 "dhgroup": "ffdhe3072" 00:35:47.403 } 00:35:47.403 } 00:35:47.403 ]' 00:35:47.403 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:47.403 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:35:47.403 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:47.403 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:35:47.663 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:47.663 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:47.663 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:47.663 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:47.663 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzc5MWYwYmYwNzgxNjQxNzkyMTBkNGU2OGU1ZTU0OWM5MmQxMmJiNjc5NzMyODczNTEwZjQ3YzVhYjJiZWQxOUnX21U=: 00:35:47.663 13:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:03:Nzc5MWYwYmYwNzgxNjQxNzkyMTBkNGU2OGU1ZTU0OWM5MmQxMmJiNjc5NzMyODczNTEwZjQ3YzVhYjJiZWQxOUnX21U=: 00:35:48.231 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:48.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:48.231 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:35:48.231 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.231 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:48.491 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.491 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:35:48.491 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:48.491 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:48.491 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:48.491 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:35:48.491 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:48.491 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:35:48.491 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:35:48.491 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:35:48.491 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:48.491 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:48.491 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.491 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:48.491 13:56:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.491 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:48.491 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:48.491 13:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:49.061 00:35:49.061 13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:49.061 13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:49.061 13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:49.061 13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:49.061 13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:49.061 13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.061 13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:49.061 13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.061 13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:49.061 { 00:35:49.061 "cntlid": 73, 00:35:49.061 "qid": 0, 00:35:49.061 "state": "enabled", 00:35:49.061 "thread": "nvmf_tgt_poll_group_000", 00:35:49.061 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:35:49.061 "listen_address": { 00:35:49.061 "trtype": "TCP", 00:35:49.061 "adrfam": "IPv4", 00:35:49.061 "traddr": "10.0.0.3", 00:35:49.061 "trsvcid": "4420" 00:35:49.061 }, 00:35:49.061 "peer_address": { 00:35:49.061 "trtype": "TCP", 00:35:49.061 "adrfam": "IPv4", 00:35:49.061 "traddr": "10.0.0.1", 00:35:49.061 "trsvcid": "43726" 00:35:49.061 }, 00:35:49.061 "auth": { 00:35:49.061 "state": "completed", 00:35:49.061 "digest": "sha384", 00:35:49.061 "dhgroup": "ffdhe4096" 00:35:49.061 } 00:35:49.061 } 00:35:49.061 ]' 00:35:49.061 13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:49.322 13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:35:49.322 13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:49.322 13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:35:49.322 13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:49.322 13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:49.322 13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:49.322 13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:49.582 13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDRmMDhhYmJjOTY1NzJiYzYwMzZlMTJmOWQ1YzhmYjU2NjEzOGFmOGRlMGEyOTE5gXY1NA==: --dhchap-ctrl-secret DHHC-1:03:MjNiZjU3YjY2NWU1MTQ1NzNkZWJjMTk0MTRhOTQ5ODI2ZDliMzZmODI3NzExYmE2ODFiMzg1MTFjODc2OTIwZb/bPv4=: 00:35:49.582 13:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:00:NDRmMDhhYmJjOTY1NzJiYzYwMzZlMTJmOWQ1YzhmYjU2NjEzOGFmOGRlMGEyOTE5gXY1NA==: --dhchap-ctrl-secret DHHC-1:03:MjNiZjU3YjY2NWU1MTQ1NzNkZWJjMTk0MTRhOTQ5ODI2ZDliMzZmODI3NzExYmE2ODFiMzg1MTFjODc2OTIwZb/bPv4=: 00:35:50.151 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:50.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:50.151 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:35:50.151 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.151 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:50.151 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.151 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:50.151 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:50.151 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:50.410 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:35:50.410 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:50.410 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:35:50.410 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:35:50.410 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:35:50.410 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:50.410 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:50.410 13:56:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.410 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:50.410 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.410 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:50.410 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:50.410 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:50.671 00:35:50.671 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:50.671 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:50.671 13:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:50.931 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:50.931 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:50.931 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.931 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:50.931 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.931 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:50.931 { 00:35:50.931 "cntlid": 75, 00:35:50.931 "qid": 0, 00:35:50.931 "state": "enabled", 00:35:50.931 "thread": "nvmf_tgt_poll_group_000", 00:35:50.931 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:35:50.931 "listen_address": { 00:35:50.931 "trtype": "TCP", 00:35:50.931 "adrfam": "IPv4", 00:35:50.931 "traddr": "10.0.0.3", 00:35:50.931 "trsvcid": "4420" 00:35:50.931 }, 00:35:50.931 "peer_address": { 00:35:50.931 "trtype": "TCP", 00:35:50.931 "adrfam": "IPv4", 00:35:50.931 "traddr": "10.0.0.1", 00:35:50.931 "trsvcid": "43756" 00:35:50.931 }, 00:35:50.931 "auth": { 00:35:50.931 "state": "completed", 00:35:50.931 "digest": "sha384", 00:35:50.931 "dhgroup": "ffdhe4096" 00:35:50.931 } 00:35:50.931 } 00:35:50.931 ]' 00:35:50.931 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:50.931 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:35:50.931 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:50.931 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:35:50.931 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:50.931 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:50.931 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:50.931 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:51.190 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmFhMjQ2OTI0ZGE4NGFhODZlMTVhMTkyZjlkZWRiNWJI/dgb: --dhchap-ctrl-secret DHHC-1:02:ZDE1MjU0YTE2NTU4YjBjYmU2ZGM2MjNmNDVlZGUyYmIwM2E2ZWU1ODFhMzAwYTM2myPlpA==: 00:35:51.190 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:01:YmFhMjQ2OTI0ZGE4NGFhODZlMTVhMTkyZjlkZWRiNWJI/dgb: --dhchap-ctrl-secret DHHC-1:02:ZDE1MjU0YTE2NTU4YjBjYmU2ZGM2MjNmNDVlZGUyYmIwM2E2ZWU1ODFhMzAwYTM2myPlpA==: 00:35:51.759 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:51.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:51.759 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:35:51.759 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.759 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:51.759 13:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.759 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:51.759 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:51.759 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:52.019 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:35:52.019 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:52.019 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:35:52.019 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:35:52.019 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:35:52.019 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:52.020 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:52.020 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.020 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:52.020 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.020 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:52.020 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:52.020 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:52.278 00:35:52.278 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:52.278 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:52.278 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:52.537 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:52.537 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:52.537 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.537 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:52.537 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.537 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:52.537 { 00:35:52.537 "cntlid": 77, 00:35:52.537 "qid": 0, 00:35:52.537 "state": "enabled", 00:35:52.537 "thread": "nvmf_tgt_poll_group_000", 00:35:52.537 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:35:52.537 "listen_address": { 00:35:52.537 "trtype": "TCP", 00:35:52.537 "adrfam": "IPv4", 00:35:52.537 "traddr": "10.0.0.3", 00:35:52.537 "trsvcid": "4420" 00:35:52.537 }, 00:35:52.537 "peer_address": { 00:35:52.537 "trtype": "TCP", 00:35:52.537 "adrfam": "IPv4", 00:35:52.537 "traddr": "10.0.0.1", 00:35:52.537 "trsvcid": "43794" 00:35:52.537 }, 00:35:52.537 "auth": { 00:35:52.537 "state": "completed", 00:35:52.537 "digest": "sha384", 00:35:52.537 "dhgroup": "ffdhe4096" 00:35:52.537 } 00:35:52.537 } 00:35:52.537 ]' 00:35:52.537 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:52.537 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:35:52.537 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:35:52.537 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:35:52.537 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:52.797 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:52.797 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:52.797 13:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:52.797 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmZkYzlhNjM4ZDAzN2FlYjUzYjg0YzEwNDZlODkzNGNjZWRiZjk3ZDZkYWZhOGM3YSt+tw==: --dhchap-ctrl-secret DHHC-1:01:OWEzZTBjOTMzNGE3MmVhZWVkYTA3ZDJlYTU0OTQ2OWNbZz1G: 00:35:52.797 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:02:ZmZkYzlhNjM4ZDAzN2FlYjUzYjg0YzEwNDZlODkzNGNjZWRiZjk3ZDZkYWZhOGM3YSt+tw==: --dhchap-ctrl-secret DHHC-1:01:OWEzZTBjOTMzNGE3MmVhZWVkYTA3ZDJlYTU0OTQ2OWNbZz1G: 00:35:53.366 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:53.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:53.366 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:35:53.366 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.366 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:53.366 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.366 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:53.366 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:53.366 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:53.625 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:35:53.625 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:53.625 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:35:53.625 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:35:53.625 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:35:53.625 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:53.625 13:56:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key3 00:35:53.625 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.625 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:53.625 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.625 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:35:53.625 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:35:53.625 13:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:35:53.942 00:35:53.942 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:53.942 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:53.942 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:54.235 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:54.235 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:54.235 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.235 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:54.235 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.235 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:54.235 { 00:35:54.235 "cntlid": 79, 00:35:54.235 "qid": 0, 00:35:54.235 "state": "enabled", 00:35:54.235 "thread": "nvmf_tgt_poll_group_000", 00:35:54.235 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:35:54.235 "listen_address": { 00:35:54.235 "trtype": "TCP", 00:35:54.235 "adrfam": "IPv4", 00:35:54.235 "traddr": "10.0.0.3", 00:35:54.235 "trsvcid": "4420" 00:35:54.235 }, 00:35:54.235 "peer_address": { 00:35:54.235 "trtype": "TCP", 00:35:54.235 "adrfam": "IPv4", 00:35:54.235 "traddr": "10.0.0.1", 00:35:54.235 "trsvcid": "34284" 00:35:54.235 }, 00:35:54.235 "auth": { 00:35:54.235 "state": "completed", 00:35:54.235 "digest": "sha384", 00:35:54.235 "dhgroup": "ffdhe4096" 00:35:54.235 } 00:35:54.235 } 00:35:54.235 ]' 00:35:54.235 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:54.235 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:35:54.235 13:56:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:54.235 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:35:54.235 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:54.235 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:54.235 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:54.235 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:54.493 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzc5MWYwYmYwNzgxNjQxNzkyMTBkNGU2OGU1ZTU0OWM5MmQxMmJiNjc5NzMyODczNTEwZjQ3YzVhYjJiZWQxOUnX21U=: 00:35:54.494 13:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:03:Nzc5MWYwYmYwNzgxNjQxNzkyMTBkNGU2OGU1ZTU0OWM5MmQxMmJiNjc5NzMyODczNTEwZjQ3YzVhYjJiZWQxOUnX21U=: 00:35:55.063 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:55.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:55.063 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:35:55.063 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.063 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:55.063 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.063 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:35:55.063 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:55.063 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:55.063 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:55.323 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:35:55.323 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:55.323 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:35:55.323 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:35:55.323 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:35:55.323 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:55.323 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:55.323 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.323 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:55.323 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.323 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:55.323 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:55.323 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:55.583 00:35:55.843 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:55.843 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:55.843 13:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:55.843 13:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:55.843 13:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:55.843 13:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.843 13:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:55.843 13:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.843 13:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:55.843 { 00:35:55.843 "cntlid": 81, 00:35:55.843 "qid": 0, 00:35:55.843 "state": "enabled", 00:35:55.843 "thread": "nvmf_tgt_poll_group_000", 00:35:55.843 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:35:55.843 "listen_address": { 00:35:55.843 "trtype": "TCP", 00:35:55.843 "adrfam": "IPv4", 00:35:55.843 "traddr": "10.0.0.3", 00:35:55.843 "trsvcid": "4420" 00:35:55.843 }, 00:35:55.843 "peer_address": { 00:35:55.843 "trtype": "TCP", 00:35:55.843 "adrfam": "IPv4", 00:35:55.843 "traddr": "10.0.0.1", 00:35:55.843 "trsvcid": "34300" 00:35:55.843 }, 00:35:55.843 "auth": { 00:35:55.843 "state": "completed", 00:35:55.843 "digest": "sha384", 00:35:55.843 "dhgroup": "ffdhe6144" 00:35:55.843 } 00:35:55.843 } 00:35:55.843 ]' 00:35:55.843 13:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
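The trace above and below repeats one verification pattern per digest/dhgroup/key combination: restrict the host initiator's DH-HMAC-CHAP options, allow the host NQN on the subsystem with a key, attach a controller through the host RPC socket, confirm the resulting qpair reports the expected auth state, then tear down and repeat with nvme-cli. The condensed shell sketch below is illustrative, not the literal target/auth.sh code: the host_rpc/tgt_rpc helper names and the default target RPC socket are assumptions, the addresses, NQNs, and key names (key0/ckey0) are taken from the trace, and the DHHC-1 secret strings are elided placeholders.

# Condensed sketch of one connect_authenticate iteration (assumed helper names; secrets elided).
host_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }  # host-side RPCs, as in the trace
tgt_rpc()  { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }                        # target-side RPCs on the default socket (assumed)

subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9

# Limit the host initiator to the digest/dhgroup under test.
host_rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

# Allow the host on the subsystem with a DH-HMAC-CHAP key (plus optional controller key).
tgt_rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Attach a controller from the host side, authenticating with the same key names.
host_rpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Confirm the controller exists and the qpair completed authentication with the expected parameters.
host_rpc bdev_nvme_get_controllers | jq -r '.[].name'                    # expect nvme0
tgt_rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'    # expect completed
tgt_rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.digest'   # expect sha384
tgt_rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.dhgroup'  # expect ffdhe6144

# Tear down, then exercise the kernel-initiator path with nvme-cli before removing the host
# (the DHHC-1 secret values printed in the trace are replaced by placeholders here).
host_rpc bdev_nvme_detach_controller nvme0
# nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 \
#     --dhchap-secret 'DHHC-1:00:<key0>:' --dhchap-ctrl-secret 'DHHC-1:03:<ckey0>:'
# nvme disconnect -n "$subnqn"
tgt_rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The remainder of the trace continues this loop for the other key indexes and for the ffdhe8192 group.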
00:35:56.103 13:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:35:56.103 13:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:56.103 13:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:35:56.103 13:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:56.103 13:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:56.103 13:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:56.103 13:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:56.362 13:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDRmMDhhYmJjOTY1NzJiYzYwMzZlMTJmOWQ1YzhmYjU2NjEzOGFmOGRlMGEyOTE5gXY1NA==: --dhchap-ctrl-secret DHHC-1:03:MjNiZjU3YjY2NWU1MTQ1NzNkZWJjMTk0MTRhOTQ5ODI2ZDliMzZmODI3NzExYmE2ODFiMzg1MTFjODc2OTIwZb/bPv4=: 00:35:56.362 13:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:00:NDRmMDhhYmJjOTY1NzJiYzYwMzZlMTJmOWQ1YzhmYjU2NjEzOGFmOGRlMGEyOTE5gXY1NA==: --dhchap-ctrl-secret DHHC-1:03:MjNiZjU3YjY2NWU1MTQ1NzNkZWJjMTk0MTRhOTQ5ODI2ZDliMzZmODI3NzExYmE2ODFiMzg1MTFjODc2OTIwZb/bPv4=: 00:35:56.930 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:56.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:56.930 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:35:56.930 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.930 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:56.930 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.930 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:56.930 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:56.930 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:57.189 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:35:57.189 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:57.189 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:35:57.189 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:35:57.189 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:35:57.189 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:57.189 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:57.189 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.189 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:57.189 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.189 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:57.189 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:57.189 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:57.450 00:35:57.450 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:57.450 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:57.450 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:57.710 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:57.710 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:57.710 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.710 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:57.710 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.710 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:57.710 { 00:35:57.710 "cntlid": 83, 00:35:57.710 "qid": 0, 00:35:57.710 "state": "enabled", 00:35:57.710 "thread": "nvmf_tgt_poll_group_000", 00:35:57.710 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:35:57.710 "listen_address": { 00:35:57.710 "trtype": "TCP", 00:35:57.710 "adrfam": "IPv4", 00:35:57.710 "traddr": "10.0.0.3", 00:35:57.710 "trsvcid": "4420" 00:35:57.710 }, 00:35:57.710 "peer_address": { 00:35:57.710 "trtype": "TCP", 00:35:57.710 "adrfam": "IPv4", 00:35:57.710 "traddr": "10.0.0.1", 00:35:57.710 "trsvcid": "34336" 00:35:57.710 }, 00:35:57.710 "auth": { 00:35:57.710 "state": "completed", 00:35:57.710 "digest": "sha384", 
00:35:57.710 "dhgroup": "ffdhe6144" 00:35:57.710 } 00:35:57.710 } 00:35:57.710 ]' 00:35:57.710 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:57.710 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:35:57.710 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:57.710 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:35:57.710 13:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:57.710 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:57.710 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:57.710 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:57.968 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmFhMjQ2OTI0ZGE4NGFhODZlMTVhMTkyZjlkZWRiNWJI/dgb: --dhchap-ctrl-secret DHHC-1:02:ZDE1MjU0YTE2NTU4YjBjYmU2ZGM2MjNmNDVlZGUyYmIwM2E2ZWU1ODFhMzAwYTM2myPlpA==: 00:35:57.968 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:01:YmFhMjQ2OTI0ZGE4NGFhODZlMTVhMTkyZjlkZWRiNWJI/dgb: --dhchap-ctrl-secret DHHC-1:02:ZDE1MjU0YTE2NTU4YjBjYmU2ZGM2MjNmNDVlZGUyYmIwM2E2ZWU1ODFhMzAwYTM2myPlpA==: 00:35:58.535 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:35:58.535 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:35:58.536 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:35:58.536 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.536 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:58.536 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.536 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:35:58.536 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:58.536 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:58.795 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:35:58.795 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:35:58.795 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:35:58.795 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:35:58.795 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:35:58.795 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:35:58.795 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:58.795 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.795 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:58.795 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.795 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:58.795 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:58.795 13:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:59.053 00:35:59.053 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:35:59.053 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:35:59.053 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:35:59.311 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:59.311 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:35:59.311 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.311 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:35:59.311 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.311 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:35:59.311 { 00:35:59.311 "cntlid": 85, 00:35:59.311 "qid": 0, 00:35:59.311 "state": "enabled", 00:35:59.311 "thread": "nvmf_tgt_poll_group_000", 00:35:59.311 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:35:59.311 "listen_address": { 00:35:59.311 "trtype": "TCP", 00:35:59.311 "adrfam": "IPv4", 00:35:59.311 "traddr": "10.0.0.3", 00:35:59.311 "trsvcid": "4420" 00:35:59.311 }, 00:35:59.311 "peer_address": { 00:35:59.311 "trtype": "TCP", 00:35:59.311 "adrfam": "IPv4", 00:35:59.311 "traddr": "10.0.0.1", 00:35:59.311 "trsvcid": "34360" 
00:35:59.311 }, 00:35:59.311 "auth": { 00:35:59.311 "state": "completed", 00:35:59.311 "digest": "sha384", 00:35:59.311 "dhgroup": "ffdhe6144" 00:35:59.311 } 00:35:59.311 } 00:35:59.312 ]' 00:35:59.312 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:35:59.312 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:35:59.571 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:35:59.571 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:35:59.571 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:35:59.571 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:35:59.571 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:35:59.571 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:35:59.831 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmZkYzlhNjM4ZDAzN2FlYjUzYjg0YzEwNDZlODkzNGNjZWRiZjk3ZDZkYWZhOGM3YSt+tw==: --dhchap-ctrl-secret DHHC-1:01:OWEzZTBjOTMzNGE3MmVhZWVkYTA3ZDJlYTU0OTQ2OWNbZz1G: 00:35:59.831 13:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:02:ZmZkYzlhNjM4ZDAzN2FlYjUzYjg0YzEwNDZlODkzNGNjZWRiZjk3ZDZkYWZhOGM3YSt+tw==: --dhchap-ctrl-secret DHHC-1:01:OWEzZTBjOTMzNGE3MmVhZWVkYTA3ZDJlYTU0OTQ2OWNbZz1G: 00:36:00.401 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:36:00.401 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:36:00.401 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:36:00.401 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.401 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:00.401 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.401 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:36:00.401 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:00.401 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:00.401 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:36:00.401 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:36:00.401 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:36:00.401 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:36:00.401 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:36:00.401 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:36:00.402 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key3 00:36:00.402 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.402 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:00.402 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.402 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:36:00.402 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:36:00.402 13:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:36:00.971 00:36:00.971 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:36:00.971 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:36:00.971 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:36:00.971 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:00.971 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:36:00.971 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.971 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:00.971 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.230 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:36:01.230 { 00:36:01.230 "cntlid": 87, 00:36:01.230 "qid": 0, 00:36:01.230 "state": "enabled", 00:36:01.230 "thread": "nvmf_tgt_poll_group_000", 00:36:01.230 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:36:01.230 "listen_address": { 00:36:01.230 "trtype": "TCP", 00:36:01.230 "adrfam": "IPv4", 00:36:01.230 "traddr": "10.0.0.3", 00:36:01.230 "trsvcid": "4420" 00:36:01.230 }, 00:36:01.230 "peer_address": { 00:36:01.230 "trtype": "TCP", 00:36:01.230 "adrfam": "IPv4", 00:36:01.230 "traddr": "10.0.0.1", 00:36:01.230 "trsvcid": 
"34382" 00:36:01.230 }, 00:36:01.230 "auth": { 00:36:01.230 "state": "completed", 00:36:01.230 "digest": "sha384", 00:36:01.230 "dhgroup": "ffdhe6144" 00:36:01.230 } 00:36:01.230 } 00:36:01.230 ]' 00:36:01.230 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:36:01.230 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:36:01.230 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:36:01.230 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:36:01.230 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:36:01.230 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:36:01.230 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:36:01.230 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:36:01.490 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzc5MWYwYmYwNzgxNjQxNzkyMTBkNGU2OGU1ZTU0OWM5MmQxMmJiNjc5NzMyODczNTEwZjQ3YzVhYjJiZWQxOUnX21U=: 00:36:01.490 13:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:03:Nzc5MWYwYmYwNzgxNjQxNzkyMTBkNGU2OGU1ZTU0OWM5MmQxMmJiNjc5NzMyODczNTEwZjQ3YzVhYjJiZWQxOUnX21U=: 00:36:02.064 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:36:02.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:36:02.064 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:36:02.064 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.064 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:02.064 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.064 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:36:02.064 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:36:02.064 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:02.064 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:02.064 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:36:02.064 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:36:02.064 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:36:02.064 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:36:02.064 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:36:02.064 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:36:02.064 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:02.064 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.064 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:02.325 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.325 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:02.325 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:02.325 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:02.584 00:36:02.584 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:36:02.584 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:36:02.584 13:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:36:02.845 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:02.845 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:36:02.845 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.845 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:02.845 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.845 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:36:02.845 { 00:36:02.845 "cntlid": 89, 00:36:02.845 "qid": 0, 00:36:02.845 "state": "enabled", 00:36:02.845 "thread": "nvmf_tgt_poll_group_000", 00:36:02.845 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:36:02.845 "listen_address": { 00:36:02.845 "trtype": "TCP", 00:36:02.845 "adrfam": "IPv4", 00:36:02.845 "traddr": "10.0.0.3", 00:36:02.845 "trsvcid": "4420" 00:36:02.845 }, 00:36:02.845 "peer_address": { 00:36:02.845 
"trtype": "TCP", 00:36:02.845 "adrfam": "IPv4", 00:36:02.845 "traddr": "10.0.0.1", 00:36:02.845 "trsvcid": "34910" 00:36:02.845 }, 00:36:02.845 "auth": { 00:36:02.845 "state": "completed", 00:36:02.845 "digest": "sha384", 00:36:02.845 "dhgroup": "ffdhe8192" 00:36:02.845 } 00:36:02.845 } 00:36:02.845 ]' 00:36:02.845 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:36:03.105 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:36:03.105 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:36:03.105 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:36:03.105 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:36:03.105 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:36:03.105 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:36:03.105 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:36:03.367 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDRmMDhhYmJjOTY1NzJiYzYwMzZlMTJmOWQ1YzhmYjU2NjEzOGFmOGRlMGEyOTE5gXY1NA==: --dhchap-ctrl-secret DHHC-1:03:MjNiZjU3YjY2NWU1MTQ1NzNkZWJjMTk0MTRhOTQ5ODI2ZDliMzZmODI3NzExYmE2ODFiMzg1MTFjODc2OTIwZb/bPv4=: 00:36:03.367 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:00:NDRmMDhhYmJjOTY1NzJiYzYwMzZlMTJmOWQ1YzhmYjU2NjEzOGFmOGRlMGEyOTE5gXY1NA==: --dhchap-ctrl-secret DHHC-1:03:MjNiZjU3YjY2NWU1MTQ1NzNkZWJjMTk0MTRhOTQ5ODI2ZDliMzZmODI3NzExYmE2ODFiMzg1MTFjODc2OTIwZb/bPv4=: 00:36:03.938 13:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:36:03.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:36:03.938 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:36:03.938 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.938 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:03.938 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.938 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:36:03.938 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:03.938 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:03.938 13:57:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:36:03.938 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:36:03.938 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:36:03.938 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:36:03.938 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:36:03.938 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:36:03.938 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:03.938 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.938 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:03.938 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.938 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:03.938 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:03.938 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:04.508 00:36:04.508 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:36:04.508 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:36:04.508 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:36:04.768 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:04.768 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:36:04.768 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.768 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:04.768 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.768 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:36:04.768 { 00:36:04.768 "cntlid": 91, 00:36:04.768 "qid": 0, 00:36:04.768 "state": "enabled", 00:36:04.768 "thread": "nvmf_tgt_poll_group_000", 00:36:04.768 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 
00:36:04.768 "listen_address": { 00:36:04.768 "trtype": "TCP", 00:36:04.768 "adrfam": "IPv4", 00:36:04.768 "traddr": "10.0.0.3", 00:36:04.768 "trsvcid": "4420" 00:36:04.768 }, 00:36:04.768 "peer_address": { 00:36:04.768 "trtype": "TCP", 00:36:04.768 "adrfam": "IPv4", 00:36:04.768 "traddr": "10.0.0.1", 00:36:04.768 "trsvcid": "34940" 00:36:04.768 }, 00:36:04.768 "auth": { 00:36:04.768 "state": "completed", 00:36:04.768 "digest": "sha384", 00:36:04.768 "dhgroup": "ffdhe8192" 00:36:04.768 } 00:36:04.768 } 00:36:04.768 ]' 00:36:04.768 13:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:36:04.768 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:36:04.768 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:36:04.768 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:36:04.768 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:36:05.027 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:36:05.027 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:36:05.027 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:36:05.027 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmFhMjQ2OTI0ZGE4NGFhODZlMTVhMTkyZjlkZWRiNWJI/dgb: --dhchap-ctrl-secret DHHC-1:02:ZDE1MjU0YTE2NTU4YjBjYmU2ZGM2MjNmNDVlZGUyYmIwM2E2ZWU1ODFhMzAwYTM2myPlpA==: 00:36:05.028 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:01:YmFhMjQ2OTI0ZGE4NGFhODZlMTVhMTkyZjlkZWRiNWJI/dgb: --dhchap-ctrl-secret DHHC-1:02:ZDE1MjU0YTE2NTU4YjBjYmU2ZGM2MjNmNDVlZGUyYmIwM2E2ZWU1ODFhMzAwYTM2myPlpA==: 00:36:05.597 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:36:05.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:36:05.597 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:36:05.597 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.597 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:05.597 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.597 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:36:05.597 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:05.597 13:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:05.857 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:36:05.857 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:36:05.857 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:36:05.857 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:36:05.857 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:36:05.857 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:36:05.857 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:05.857 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.857 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:05.857 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.857 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:05.857 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:05.858 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:06.428 00:36:06.428 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:36:06.428 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:36:06.428 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:36:06.688 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:06.688 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:36:06.688 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.688 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:06.688 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.688 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:36:06.688 { 00:36:06.688 "cntlid": 93, 00:36:06.688 "qid": 0, 00:36:06.688 "state": "enabled", 00:36:06.688 "thread": 
"nvmf_tgt_poll_group_000", 00:36:06.688 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:36:06.688 "listen_address": { 00:36:06.688 "trtype": "TCP", 00:36:06.688 "adrfam": "IPv4", 00:36:06.688 "traddr": "10.0.0.3", 00:36:06.688 "trsvcid": "4420" 00:36:06.688 }, 00:36:06.688 "peer_address": { 00:36:06.688 "trtype": "TCP", 00:36:06.688 "adrfam": "IPv4", 00:36:06.688 "traddr": "10.0.0.1", 00:36:06.688 "trsvcid": "34958" 00:36:06.688 }, 00:36:06.688 "auth": { 00:36:06.688 "state": "completed", 00:36:06.688 "digest": "sha384", 00:36:06.688 "dhgroup": "ffdhe8192" 00:36:06.688 } 00:36:06.688 } 00:36:06.688 ]' 00:36:06.688 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:36:06.688 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:36:06.688 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:36:06.688 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:36:06.688 13:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:36:06.949 13:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:36:06.949 13:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:36:06.949 13:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:36:06.949 13:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmZkYzlhNjM4ZDAzN2FlYjUzYjg0YzEwNDZlODkzNGNjZWRiZjk3ZDZkYWZhOGM3YSt+tw==: --dhchap-ctrl-secret DHHC-1:01:OWEzZTBjOTMzNGE3MmVhZWVkYTA3ZDJlYTU0OTQ2OWNbZz1G: 00:36:06.949 13:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:02:ZmZkYzlhNjM4ZDAzN2FlYjUzYjg0YzEwNDZlODkzNGNjZWRiZjk3ZDZkYWZhOGM3YSt+tw==: --dhchap-ctrl-secret DHHC-1:01:OWEzZTBjOTMzNGE3MmVhZWVkYTA3ZDJlYTU0OTQ2OWNbZz1G: 00:36:07.518 13:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:36:07.518 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:36:07.518 13:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:36:07.518 13:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.518 13:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:07.518 13:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.518 13:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:36:07.518 13:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:07.518 13:57:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:07.778 13:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:36:07.778 13:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:36:07.778 13:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:36:07.778 13:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:36:07.778 13:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:36:07.778 13:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:36:07.778 13:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key3 00:36:07.778 13:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.778 13:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:07.778 13:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.778 13:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:36:07.778 13:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:36:07.778 13:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:36:08.348 00:36:08.348 13:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:36:08.348 13:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:36:08.348 13:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:36:08.608 13:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:08.608 13:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:36:08.608 13:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.608 13:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:08.608 13:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.608 13:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:36:08.608 { 00:36:08.608 "cntlid": 95, 00:36:08.608 "qid": 0, 00:36:08.608 "state": "enabled", 00:36:08.608 
"thread": "nvmf_tgt_poll_group_000", 00:36:08.608 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:36:08.608 "listen_address": { 00:36:08.608 "trtype": "TCP", 00:36:08.608 "adrfam": "IPv4", 00:36:08.608 "traddr": "10.0.0.3", 00:36:08.608 "trsvcid": "4420" 00:36:08.608 }, 00:36:08.608 "peer_address": { 00:36:08.608 "trtype": "TCP", 00:36:08.608 "adrfam": "IPv4", 00:36:08.608 "traddr": "10.0.0.1", 00:36:08.608 "trsvcid": "34998" 00:36:08.608 }, 00:36:08.608 "auth": { 00:36:08.608 "state": "completed", 00:36:08.608 "digest": "sha384", 00:36:08.608 "dhgroup": "ffdhe8192" 00:36:08.608 } 00:36:08.608 } 00:36:08.608 ]' 00:36:08.608 13:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:36:08.608 13:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:36:08.608 13:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:36:08.608 13:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:36:08.608 13:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:36:08.868 13:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:36:08.868 13:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:36:08.868 13:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:36:08.868 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzc5MWYwYmYwNzgxNjQxNzkyMTBkNGU2OGU1ZTU0OWM5MmQxMmJiNjc5NzMyODczNTEwZjQ3YzVhYjJiZWQxOUnX21U=: 00:36:08.868 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:03:Nzc5MWYwYmYwNzgxNjQxNzkyMTBkNGU2OGU1ZTU0OWM5MmQxMmJiNjc5NzMyODczNTEwZjQ3YzVhYjJiZWQxOUnX21U=: 00:36:09.437 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:36:09.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:36:09.697 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:36:09.697 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.697 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:09.697 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.697 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:36:09.697 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:36:09.697 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:36:09.697 13:57:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:36:09.697 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:36:09.697 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:36:09.697 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:36:09.697 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:36:09.697 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:36:09.697 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:36:09.697 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:36:09.697 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:09.697 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.697 13:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:09.697 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.697 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:09.697 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:09.697 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:09.958 00:36:10.218 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:36:10.218 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:36:10.218 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:36:10.218 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:10.218 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:36:10.218 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.218 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:10.218 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.218 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:36:10.218 { 00:36:10.218 "cntlid": 97, 00:36:10.218 "qid": 0, 00:36:10.218 "state": "enabled", 00:36:10.218 "thread": "nvmf_tgt_poll_group_000", 00:36:10.218 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:36:10.218 "listen_address": { 00:36:10.218 "trtype": "TCP", 00:36:10.218 "adrfam": "IPv4", 00:36:10.218 "traddr": "10.0.0.3", 00:36:10.218 "trsvcid": "4420" 00:36:10.218 }, 00:36:10.218 "peer_address": { 00:36:10.218 "trtype": "TCP", 00:36:10.218 "adrfam": "IPv4", 00:36:10.218 "traddr": "10.0.0.1", 00:36:10.218 "trsvcid": "35020" 00:36:10.218 }, 00:36:10.218 "auth": { 00:36:10.218 "state": "completed", 00:36:10.218 "digest": "sha512", 00:36:10.218 "dhgroup": "null" 00:36:10.218 } 00:36:10.218 } 00:36:10.218 ]' 00:36:10.218 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:36:10.477 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:36:10.477 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:36:10.477 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:36:10.477 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:36:10.477 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:36:10.477 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:36:10.477 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:36:10.737 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDRmMDhhYmJjOTY1NzJiYzYwMzZlMTJmOWQ1YzhmYjU2NjEzOGFmOGRlMGEyOTE5gXY1NA==: --dhchap-ctrl-secret DHHC-1:03:MjNiZjU3YjY2NWU1MTQ1NzNkZWJjMTk0MTRhOTQ5ODI2ZDliMzZmODI3NzExYmE2ODFiMzg1MTFjODc2OTIwZb/bPv4=: 00:36:10.737 13:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:00:NDRmMDhhYmJjOTY1NzJiYzYwMzZlMTJmOWQ1YzhmYjU2NjEzOGFmOGRlMGEyOTE5gXY1NA==: --dhchap-ctrl-secret DHHC-1:03:MjNiZjU3YjY2NWU1MTQ1NzNkZWJjMTk0MTRhOTQ5ODI2ZDliMzZmODI3NzExYmE2ODFiMzg1MTFjODc2OTIwZb/bPv4=: 00:36:11.306 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:36:11.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:36:11.306 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:36:11.306 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.306 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:11.306 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:36:11.306 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:36:11.306 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:36:11.306 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:36:11.566 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:36:11.566 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:36:11.566 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:36:11.566 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:36:11.566 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:36:11.566 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:36:11.566 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:11.566 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.566 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:11.566 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.566 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:11.566 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:11.566 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:11.826 00:36:11.826 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:36:11.826 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:36:11.826 13:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:36:12.085 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:12.085 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:36:12.085 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.085 13:57:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:12.085 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.085 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:36:12.085 { 00:36:12.085 "cntlid": 99, 00:36:12.085 "qid": 0, 00:36:12.085 "state": "enabled", 00:36:12.085 "thread": "nvmf_tgt_poll_group_000", 00:36:12.085 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:36:12.085 "listen_address": { 00:36:12.085 "trtype": "TCP", 00:36:12.085 "adrfam": "IPv4", 00:36:12.085 "traddr": "10.0.0.3", 00:36:12.085 "trsvcid": "4420" 00:36:12.085 }, 00:36:12.085 "peer_address": { 00:36:12.086 "trtype": "TCP", 00:36:12.086 "adrfam": "IPv4", 00:36:12.086 "traddr": "10.0.0.1", 00:36:12.086 "trsvcid": "35056" 00:36:12.086 }, 00:36:12.086 "auth": { 00:36:12.086 "state": "completed", 00:36:12.086 "digest": "sha512", 00:36:12.086 "dhgroup": "null" 00:36:12.086 } 00:36:12.086 } 00:36:12.086 ]' 00:36:12.086 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:36:12.086 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:36:12.086 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:36:12.086 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:36:12.086 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:36:12.086 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:36:12.086 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:36:12.086 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:36:12.346 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmFhMjQ2OTI0ZGE4NGFhODZlMTVhMTkyZjlkZWRiNWJI/dgb: --dhchap-ctrl-secret DHHC-1:02:ZDE1MjU0YTE2NTU4YjBjYmU2ZGM2MjNmNDVlZGUyYmIwM2E2ZWU1ODFhMzAwYTM2myPlpA==: 00:36:12.346 13:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:01:YmFhMjQ2OTI0ZGE4NGFhODZlMTVhMTkyZjlkZWRiNWJI/dgb: --dhchap-ctrl-secret DHHC-1:02:ZDE1MjU0YTE2NTU4YjBjYmU2ZGM2MjNmNDVlZGUyYmIwM2E2ZWU1ODFhMzAwYTM2myPlpA==: 00:36:12.917 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:36:12.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:36:12.917 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:36:12.917 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.917 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:12.917 13:57:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.917 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:36:12.917 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:36:12.917 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:36:13.177 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:36:13.177 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:36:13.177 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:36:13.177 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:36:13.177 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:36:13.177 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:36:13.178 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:13.178 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.178 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:13.178 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.178 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:13.178 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:13.178 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:13.438 00:36:13.438 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:36:13.438 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:36:13.438 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:36:13.699 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:13.699 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:36:13.699 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.699 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:13.699 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.699 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:36:13.699 { 00:36:13.699 "cntlid": 101, 00:36:13.699 "qid": 0, 00:36:13.699 "state": "enabled", 00:36:13.699 "thread": "nvmf_tgt_poll_group_000", 00:36:13.699 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:36:13.699 "listen_address": { 00:36:13.699 "trtype": "TCP", 00:36:13.699 "adrfam": "IPv4", 00:36:13.699 "traddr": "10.0.0.3", 00:36:13.699 "trsvcid": "4420" 00:36:13.699 }, 00:36:13.699 "peer_address": { 00:36:13.699 "trtype": "TCP", 00:36:13.699 "adrfam": "IPv4", 00:36:13.699 "traddr": "10.0.0.1", 00:36:13.699 "trsvcid": "36800" 00:36:13.699 }, 00:36:13.699 "auth": { 00:36:13.699 "state": "completed", 00:36:13.699 "digest": "sha512", 00:36:13.699 "dhgroup": "null" 00:36:13.699 } 00:36:13.699 } 00:36:13.699 ]' 00:36:13.699 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:36:13.699 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:36:13.699 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:36:13.699 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:36:13.699 13:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:36:13.699 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:36:13.699 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:36:13.699 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:36:13.959 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmZkYzlhNjM4ZDAzN2FlYjUzYjg0YzEwNDZlODkzNGNjZWRiZjk3ZDZkYWZhOGM3YSt+tw==: --dhchap-ctrl-secret DHHC-1:01:OWEzZTBjOTMzNGE3MmVhZWVkYTA3ZDJlYTU0OTQ2OWNbZz1G: 00:36:13.959 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:02:ZmZkYzlhNjM4ZDAzN2FlYjUzYjg0YzEwNDZlODkzNGNjZWRiZjk3ZDZkYWZhOGM3YSt+tw==: --dhchap-ctrl-secret DHHC-1:01:OWEzZTBjOTMzNGE3MmVhZWVkYTA3ZDJlYTU0OTQ2OWNbZz1G: 00:36:14.563 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:36:14.563 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:36:14.563 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:36:14.563 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.563 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:36:14.563 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.563 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:36:14.563 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:36:14.563 13:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:36:14.823 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:36:14.823 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:36:14.823 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:36:14.823 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:36:14.823 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:36:14.823 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:36:14.823 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key3 00:36:14.823 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.823 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:14.823 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.823 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:36:14.823 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:36:14.823 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:36:15.082 00:36:15.082 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:36:15.082 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:36:15.082 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:36:15.341 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:15.341 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:36:15.341 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:36:15.341 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:15.341 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.341 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:36:15.341 { 00:36:15.341 "cntlid": 103, 00:36:15.341 "qid": 0, 00:36:15.341 "state": "enabled", 00:36:15.341 "thread": "nvmf_tgt_poll_group_000", 00:36:15.341 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:36:15.341 "listen_address": { 00:36:15.341 "trtype": "TCP", 00:36:15.341 "adrfam": "IPv4", 00:36:15.341 "traddr": "10.0.0.3", 00:36:15.341 "trsvcid": "4420" 00:36:15.341 }, 00:36:15.341 "peer_address": { 00:36:15.341 "trtype": "TCP", 00:36:15.341 "adrfam": "IPv4", 00:36:15.341 "traddr": "10.0.0.1", 00:36:15.341 "trsvcid": "36838" 00:36:15.341 }, 00:36:15.341 "auth": { 00:36:15.341 "state": "completed", 00:36:15.341 "digest": "sha512", 00:36:15.341 "dhgroup": "null" 00:36:15.341 } 00:36:15.341 } 00:36:15.341 ]' 00:36:15.341 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:36:15.341 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:36:15.341 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:36:15.341 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:36:15.341 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:36:15.602 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:36:15.602 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:36:15.602 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:36:15.602 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzc5MWYwYmYwNzgxNjQxNzkyMTBkNGU2OGU1ZTU0OWM5MmQxMmJiNjc5NzMyODczNTEwZjQ3YzVhYjJiZWQxOUnX21U=: 00:36:15.602 13:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:03:Nzc5MWYwYmYwNzgxNjQxNzkyMTBkNGU2OGU1ZTU0OWM5MmQxMmJiNjc5NzMyODczNTEwZjQ3YzVhYjJiZWQxOUnX21U=: 00:36:16.171 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:36:16.171 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:36:16.171 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:36:16.171 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.171 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:16.171 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:36:16.171 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:36:16.171 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:36:16.171 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:16.171 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:16.429 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:36:16.429 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:36:16.429 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:36:16.429 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:36:16.429 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:36:16.429 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:36:16.429 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:16.429 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.429 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:16.429 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.429 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:16.429 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:16.429 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:16.687 00:36:16.687 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:36:16.687 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:36:16.687 13:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:36:16.947 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:16.947 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:36:16.947 
13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.947 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:16.947 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.947 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:36:16.947 { 00:36:16.947 "cntlid": 105, 00:36:16.947 "qid": 0, 00:36:16.947 "state": "enabled", 00:36:16.947 "thread": "nvmf_tgt_poll_group_000", 00:36:16.947 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:36:16.947 "listen_address": { 00:36:16.947 "trtype": "TCP", 00:36:16.947 "adrfam": "IPv4", 00:36:16.947 "traddr": "10.0.0.3", 00:36:16.947 "trsvcid": "4420" 00:36:16.947 }, 00:36:16.947 "peer_address": { 00:36:16.947 "trtype": "TCP", 00:36:16.947 "adrfam": "IPv4", 00:36:16.947 "traddr": "10.0.0.1", 00:36:16.947 "trsvcid": "36874" 00:36:16.947 }, 00:36:16.947 "auth": { 00:36:16.947 "state": "completed", 00:36:16.947 "digest": "sha512", 00:36:16.947 "dhgroup": "ffdhe2048" 00:36:16.947 } 00:36:16.947 } 00:36:16.947 ]' 00:36:16.947 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:36:16.947 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:36:16.947 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:36:17.207 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:36:17.207 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:36:17.207 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:36:17.207 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:36:17.207 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:36:17.466 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDRmMDhhYmJjOTY1NzJiYzYwMzZlMTJmOWQ1YzhmYjU2NjEzOGFmOGRlMGEyOTE5gXY1NA==: --dhchap-ctrl-secret DHHC-1:03:MjNiZjU3YjY2NWU1MTQ1NzNkZWJjMTk0MTRhOTQ5ODI2ZDliMzZmODI3NzExYmE2ODFiMzg1MTFjODc2OTIwZb/bPv4=: 00:36:17.466 13:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:00:NDRmMDhhYmJjOTY1NzJiYzYwMzZlMTJmOWQ1YzhmYjU2NjEzOGFmOGRlMGEyOTE5gXY1NA==: --dhchap-ctrl-secret DHHC-1:03:MjNiZjU3YjY2NWU1MTQ1NzNkZWJjMTk0MTRhOTQ5ODI2ZDliMzZmODI3NzExYmE2ODFiMzg1MTFjODc2OTIwZb/bPv4=: 00:36:18.045 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:36:18.045 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:36:18.045 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:36:18.045 13:57:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.045 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:18.045 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.045 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:36:18.045 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:18.045 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:18.045 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:36:18.045 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:36:18.045 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:36:18.045 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:36:18.045 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:36:18.045 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:36:18.045 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:18.045 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.045 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:18.045 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.045 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:18.045 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:18.045 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:18.310 00:36:18.310 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:36:18.310 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:36:18.310 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:36:18.568 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:36:18.568 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:36:18.568 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.569 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:18.569 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.569 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:36:18.569 { 00:36:18.569 "cntlid": 107, 00:36:18.569 "qid": 0, 00:36:18.569 "state": "enabled", 00:36:18.569 "thread": "nvmf_tgt_poll_group_000", 00:36:18.569 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:36:18.569 "listen_address": { 00:36:18.569 "trtype": "TCP", 00:36:18.569 "adrfam": "IPv4", 00:36:18.569 "traddr": "10.0.0.3", 00:36:18.569 "trsvcid": "4420" 00:36:18.569 }, 00:36:18.569 "peer_address": { 00:36:18.569 "trtype": "TCP", 00:36:18.569 "adrfam": "IPv4", 00:36:18.569 "traddr": "10.0.0.1", 00:36:18.569 "trsvcid": "36892" 00:36:18.569 }, 00:36:18.569 "auth": { 00:36:18.569 "state": "completed", 00:36:18.569 "digest": "sha512", 00:36:18.569 "dhgroup": "ffdhe2048" 00:36:18.569 } 00:36:18.569 } 00:36:18.569 ]' 00:36:18.569 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:36:18.569 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:36:18.569 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:36:18.828 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:36:18.828 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:36:18.828 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:36:18.828 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:36:18.828 13:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:36:19.087 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmFhMjQ2OTI0ZGE4NGFhODZlMTVhMTkyZjlkZWRiNWJI/dgb: --dhchap-ctrl-secret DHHC-1:02:ZDE1MjU0YTE2NTU4YjBjYmU2ZGM2MjNmNDVlZGUyYmIwM2E2ZWU1ODFhMzAwYTM2myPlpA==: 00:36:19.087 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:01:YmFhMjQ2OTI0ZGE4NGFhODZlMTVhMTkyZjlkZWRiNWJI/dgb: --dhchap-ctrl-secret DHHC-1:02:ZDE1MjU0YTE2NTU4YjBjYmU2ZGM2MjNmNDVlZGUyYmIwM2E2ZWU1ODFhMzAwYTM2myPlpA==: 00:36:19.655 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:36:19.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:36:19.655 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:36:19.655 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.655 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:19.655 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.655 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:36:19.655 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:19.655 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:19.655 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:36:19.655 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:36:19.655 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:36:19.655 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:36:19.655 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:36:19.655 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:36:19.655 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:19.655 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.655 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:19.915 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.915 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:19.915 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:19.915 13:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:20.174 00:36:20.174 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:36:20.174 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:36:20.174 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:36:20.174 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:20.174 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:36:20.174 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.174 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:20.434 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.434 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:36:20.434 { 00:36:20.434 "cntlid": 109, 00:36:20.434 "qid": 0, 00:36:20.434 "state": "enabled", 00:36:20.434 "thread": "nvmf_tgt_poll_group_000", 00:36:20.434 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:36:20.434 "listen_address": { 00:36:20.434 "trtype": "TCP", 00:36:20.434 "adrfam": "IPv4", 00:36:20.434 "traddr": "10.0.0.3", 00:36:20.434 "trsvcid": "4420" 00:36:20.434 }, 00:36:20.434 "peer_address": { 00:36:20.434 "trtype": "TCP", 00:36:20.434 "adrfam": "IPv4", 00:36:20.434 "traddr": "10.0.0.1", 00:36:20.434 "trsvcid": "36934" 00:36:20.434 }, 00:36:20.434 "auth": { 00:36:20.434 "state": "completed", 00:36:20.434 "digest": "sha512", 00:36:20.434 "dhgroup": "ffdhe2048" 00:36:20.434 } 00:36:20.434 } 00:36:20.434 ]' 00:36:20.434 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:36:20.434 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:36:20.434 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:36:20.434 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:36:20.434 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:36:20.434 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:36:20.434 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:36:20.434 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:36:20.695 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmZkYzlhNjM4ZDAzN2FlYjUzYjg0YzEwNDZlODkzNGNjZWRiZjk3ZDZkYWZhOGM3YSt+tw==: --dhchap-ctrl-secret DHHC-1:01:OWEzZTBjOTMzNGE3MmVhZWVkYTA3ZDJlYTU0OTQ2OWNbZz1G: 00:36:20.695 13:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:02:ZmZkYzlhNjM4ZDAzN2FlYjUzYjg0YzEwNDZlODkzNGNjZWRiZjk3ZDZkYWZhOGM3YSt+tw==: --dhchap-ctrl-secret DHHC-1:01:OWEzZTBjOTMzNGE3MmVhZWVkYTA3ZDJlYTU0OTQ2OWNbZz1G: 00:36:21.264 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:36:21.264 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:36:21.264 13:57:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:36:21.264 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.264 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:21.264 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.264 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:36:21.264 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:21.264 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:21.524 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:36:21.524 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:36:21.524 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:36:21.524 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:36:21.524 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:36:21.524 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:36:21.524 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key3 00:36:21.524 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.524 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:21.524 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.524 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:36:21.524 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:36:21.524 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:36:21.786 00:36:21.786 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:36:21.786 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:36:21.786 13:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:36:21.787 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:21.787 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:36:21.787 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.787 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:21.787 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.787 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:36:21.787 { 00:36:21.787 "cntlid": 111, 00:36:21.787 "qid": 0, 00:36:21.787 "state": "enabled", 00:36:21.787 "thread": "nvmf_tgt_poll_group_000", 00:36:21.787 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:36:21.787 "listen_address": { 00:36:21.787 "trtype": "TCP", 00:36:21.787 "adrfam": "IPv4", 00:36:21.787 "traddr": "10.0.0.3", 00:36:21.787 "trsvcid": "4420" 00:36:21.787 }, 00:36:21.787 "peer_address": { 00:36:21.787 "trtype": "TCP", 00:36:21.787 "adrfam": "IPv4", 00:36:21.787 "traddr": "10.0.0.1", 00:36:21.787 "trsvcid": "36958" 00:36:21.787 }, 00:36:21.787 "auth": { 00:36:21.787 "state": "completed", 00:36:21.787 "digest": "sha512", 00:36:21.787 "dhgroup": "ffdhe2048" 00:36:21.787 } 00:36:21.787 } 00:36:21.787 ]' 00:36:21.787 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:36:22.129 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:36:22.129 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:36:22.129 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:36:22.129 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:36:22.129 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:36:22.129 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:36:22.129 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:36:22.388 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzc5MWYwYmYwNzgxNjQxNzkyMTBkNGU2OGU1ZTU0OWM5MmQxMmJiNjc5NzMyODczNTEwZjQ3YzVhYjJiZWQxOUnX21U=: 00:36:22.388 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:03:Nzc5MWYwYmYwNzgxNjQxNzkyMTBkNGU2OGU1ZTU0OWM5MmQxMmJiNjc5NzMyODczNTEwZjQ3YzVhYjJiZWQxOUnX21U=: 00:36:22.648 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:36:22.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:36:22.648 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:36:22.648 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.648 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:22.909 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.909 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:36:22.909 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:36:22.909 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:22.909 13:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:22.909 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:36:22.909 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:36:22.909 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:36:22.909 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:36:22.909 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:36:22.909 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:36:22.909 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:22.909 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.909 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:22.909 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.909 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:22.909 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:22.909 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:23.168 00:36:23.426 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:36:23.426 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
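(Aside: the trace above and below repeats one DH-HMAC-CHAP cycle per digest/dhgroup/key combination. A condensed sketch of a single cycle is shown here for readability; every RPC, flag, NQN and path is copied from the trace itself, and rpc_cmd is the test harness's wrapper for the target-side RPC socket, while hostrpc goes through /var/tmp/host.sock.)

    # one authentication cycle, as exercised by target/auth.sh (illustrative recap, not part of the log)
    HOSTRPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock"
    HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9"
    SUBNQN="nqn.2024-03.io.spdk:cnode0"

    # 1. restrict the host app to the digest/dhgroup pair under test
    $HOSTRPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

    # 2. allow the host NQN on the subsystem with the key (and optional ctrlr key) under test
    rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # 3. attach a controller from the host app and confirm it authenticated
    $HOSTRPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    $HOSTRPC bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0

    # 4. verify the target-side qpair reports the negotiated auth parameters
    rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'
    # expect: sha512 / ffdhe3072 / completed

    # 5. tear down before the next digest/dhgroup/key combination
    $HOSTRPC bdev_nvme_detach_controller nvme0
    rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

(The nvme_connect/nvme disconnect steps in the trace exercise the same keys through nvme-cli with --dhchap-secret/--dhchap-ctrl-secret DHHC-1 strings before the bdev path is tested.)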
00:36:23.426 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:36:23.426 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:23.427 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:36:23.427 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.427 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:23.427 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.427 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:36:23.427 { 00:36:23.427 "cntlid": 113, 00:36:23.427 "qid": 0, 00:36:23.427 "state": "enabled", 00:36:23.427 "thread": "nvmf_tgt_poll_group_000", 00:36:23.427 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:36:23.427 "listen_address": { 00:36:23.427 "trtype": "TCP", 00:36:23.427 "adrfam": "IPv4", 00:36:23.427 "traddr": "10.0.0.3", 00:36:23.427 "trsvcid": "4420" 00:36:23.427 }, 00:36:23.427 "peer_address": { 00:36:23.427 "trtype": "TCP", 00:36:23.427 "adrfam": "IPv4", 00:36:23.427 "traddr": "10.0.0.1", 00:36:23.427 "trsvcid": "44030" 00:36:23.427 }, 00:36:23.427 "auth": { 00:36:23.427 "state": "completed", 00:36:23.427 "digest": "sha512", 00:36:23.427 "dhgroup": "ffdhe3072" 00:36:23.427 } 00:36:23.427 } 00:36:23.427 ]' 00:36:23.427 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:36:23.685 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:36:23.685 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:36:23.685 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:36:23.685 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:36:23.685 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:36:23.685 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:36:23.685 13:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:36:23.944 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDRmMDhhYmJjOTY1NzJiYzYwMzZlMTJmOWQ1YzhmYjU2NjEzOGFmOGRlMGEyOTE5gXY1NA==: --dhchap-ctrl-secret DHHC-1:03:MjNiZjU3YjY2NWU1MTQ1NzNkZWJjMTk0MTRhOTQ5ODI2ZDliMzZmODI3NzExYmE2ODFiMzg1MTFjODc2OTIwZb/bPv4=: 00:36:23.944 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:00:NDRmMDhhYmJjOTY1NzJiYzYwMzZlMTJmOWQ1YzhmYjU2NjEzOGFmOGRlMGEyOTE5gXY1NA==: --dhchap-ctrl-secret 
DHHC-1:03:MjNiZjU3YjY2NWU1MTQ1NzNkZWJjMTk0MTRhOTQ5ODI2ZDliMzZmODI3NzExYmE2ODFiMzg1MTFjODc2OTIwZb/bPv4=: 00:36:24.511 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:36:24.511 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:36:24.511 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:36:24.511 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.511 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:24.511 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.512 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:36:24.512 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:24.512 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:24.770 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:36:24.770 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:36:24.770 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:36:24.770 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:36:24.770 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:36:24.770 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:36:24.770 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:24.770 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.770 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:24.770 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.770 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:24.770 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:24.771 13:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:25.029 00:36:25.029 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:36:25.029 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:36:25.029 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:36:25.288 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:25.288 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:36:25.288 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.288 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:25.288 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.288 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:36:25.288 { 00:36:25.288 "cntlid": 115, 00:36:25.288 "qid": 0, 00:36:25.288 "state": "enabled", 00:36:25.288 "thread": "nvmf_tgt_poll_group_000", 00:36:25.288 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:36:25.288 "listen_address": { 00:36:25.288 "trtype": "TCP", 00:36:25.288 "adrfam": "IPv4", 00:36:25.288 "traddr": "10.0.0.3", 00:36:25.288 "trsvcid": "4420" 00:36:25.288 }, 00:36:25.288 "peer_address": { 00:36:25.288 "trtype": "TCP", 00:36:25.288 "adrfam": "IPv4", 00:36:25.288 "traddr": "10.0.0.1", 00:36:25.288 "trsvcid": "44048" 00:36:25.288 }, 00:36:25.288 "auth": { 00:36:25.288 "state": "completed", 00:36:25.288 "digest": "sha512", 00:36:25.288 "dhgroup": "ffdhe3072" 00:36:25.288 } 00:36:25.288 } 00:36:25.288 ]' 00:36:25.288 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:36:25.288 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:36:25.288 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:36:25.288 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:36:25.288 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:36:25.288 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:36:25.288 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:36:25.288 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:36:25.547 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmFhMjQ2OTI0ZGE4NGFhODZlMTVhMTkyZjlkZWRiNWJI/dgb: --dhchap-ctrl-secret DHHC-1:02:ZDE1MjU0YTE2NTU4YjBjYmU2ZGM2MjNmNDVlZGUyYmIwM2E2ZWU1ODFhMzAwYTM2myPlpA==: 00:36:25.547 13:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 
105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:01:YmFhMjQ2OTI0ZGE4NGFhODZlMTVhMTkyZjlkZWRiNWJI/dgb: --dhchap-ctrl-secret DHHC-1:02:ZDE1MjU0YTE2NTU4YjBjYmU2ZGM2MjNmNDVlZGUyYmIwM2E2ZWU1ODFhMzAwYTM2myPlpA==: 00:36:26.115 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:36:26.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:36:26.115 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:36:26.115 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.115 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:26.115 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.115 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:36:26.115 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:26.115 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:26.373 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:36:26.373 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:36:26.373 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:36:26.373 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:36:26.373 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:36:26.373 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:36:26.373 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:26.373 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.373 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:26.373 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.373 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:26.373 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:26.373 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:26.631 00:36:26.631 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:36:26.631 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:36:26.631 13:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:36:26.889 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:26.889 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:36:26.889 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.889 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:26.889 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.889 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:36:26.889 { 00:36:26.889 "cntlid": 117, 00:36:26.889 "qid": 0, 00:36:26.889 "state": "enabled", 00:36:26.889 "thread": "nvmf_tgt_poll_group_000", 00:36:26.889 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:36:26.889 "listen_address": { 00:36:26.889 "trtype": "TCP", 00:36:26.889 "adrfam": "IPv4", 00:36:26.889 "traddr": "10.0.0.3", 00:36:26.889 "trsvcid": "4420" 00:36:26.889 }, 00:36:26.889 "peer_address": { 00:36:26.889 "trtype": "TCP", 00:36:26.889 "adrfam": "IPv4", 00:36:26.889 "traddr": "10.0.0.1", 00:36:26.889 "trsvcid": "44074" 00:36:26.889 }, 00:36:26.889 "auth": { 00:36:26.889 "state": "completed", 00:36:26.889 "digest": "sha512", 00:36:26.889 "dhgroup": "ffdhe3072" 00:36:26.889 } 00:36:26.889 } 00:36:26.889 ]' 00:36:26.889 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:36:27.148 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:36:27.148 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:36:27.148 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:36:27.148 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:36:27.148 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:36:27.148 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:36:27.148 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:36:27.406 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmZkYzlhNjM4ZDAzN2FlYjUzYjg0YzEwNDZlODkzNGNjZWRiZjk3ZDZkYWZhOGM3YSt+tw==: --dhchap-ctrl-secret DHHC-1:01:OWEzZTBjOTMzNGE3MmVhZWVkYTA3ZDJlYTU0OTQ2OWNbZz1G: 00:36:27.406 13:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:02:ZmZkYzlhNjM4ZDAzN2FlYjUzYjg0YzEwNDZlODkzNGNjZWRiZjk3ZDZkYWZhOGM3YSt+tw==: --dhchap-ctrl-secret DHHC-1:01:OWEzZTBjOTMzNGE3MmVhZWVkYTA3ZDJlYTU0OTQ2OWNbZz1G: 00:36:27.973 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:36:27.973 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:36:27.973 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:36:27.973 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.973 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:27.973 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.973 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:36:27.973 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:27.973 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:28.231 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:36:28.231 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:36:28.231 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:36:28.232 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:36:28.232 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:36:28.232 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:36:28.232 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key3 00:36:28.232 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.232 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:28.232 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.232 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:36:28.232 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:36:28.232 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:36:28.490 00:36:28.748 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:36:28.748 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:36:28.748 13:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:36:29.007 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:29.007 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:36:29.007 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.007 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:29.007 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.007 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:36:29.007 { 00:36:29.007 "cntlid": 119, 00:36:29.007 "qid": 0, 00:36:29.007 "state": "enabled", 00:36:29.007 "thread": "nvmf_tgt_poll_group_000", 00:36:29.007 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:36:29.007 "listen_address": { 00:36:29.007 "trtype": "TCP", 00:36:29.007 "adrfam": "IPv4", 00:36:29.007 "traddr": "10.0.0.3", 00:36:29.007 "trsvcid": "4420" 00:36:29.007 }, 00:36:29.007 "peer_address": { 00:36:29.007 "trtype": "TCP", 00:36:29.007 "adrfam": "IPv4", 00:36:29.007 "traddr": "10.0.0.1", 00:36:29.007 "trsvcid": "44106" 00:36:29.007 }, 00:36:29.007 "auth": { 00:36:29.007 "state": "completed", 00:36:29.007 "digest": "sha512", 00:36:29.007 "dhgroup": "ffdhe3072" 00:36:29.007 } 00:36:29.007 } 00:36:29.007 ]' 00:36:29.007 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:36:29.007 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:36:29.007 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:36:29.007 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:36:29.007 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:36:29.007 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:36:29.007 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:36:29.007 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:36:29.272 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzc5MWYwYmYwNzgxNjQxNzkyMTBkNGU2OGU1ZTU0OWM5MmQxMmJiNjc5NzMyODczNTEwZjQ3YzVhYjJiZWQxOUnX21U=: 00:36:29.272 13:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:03:Nzc5MWYwYmYwNzgxNjQxNzkyMTBkNGU2OGU1ZTU0OWM5MmQxMmJiNjc5NzMyODczNTEwZjQ3YzVhYjJiZWQxOUnX21U=: 00:36:29.871 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:36:29.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:36:29.871 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:36:29.871 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.871 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:29.871 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.871 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:36:29.871 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:36:29.871 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:29.871 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:30.128 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:36:30.128 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:36:30.128 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:36:30.128 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:36:30.128 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:36:30.128 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:36:30.128 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:30.128 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.128 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:30.128 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.128 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:30.128 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:30.128 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:30.386 00:36:30.386 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:36:30.386 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:36:30.386 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:36:30.644 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:30.644 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:36:30.644 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.644 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:30.644 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.644 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:36:30.644 { 00:36:30.644 "cntlid": 121, 00:36:30.644 "qid": 0, 00:36:30.644 "state": "enabled", 00:36:30.644 "thread": "nvmf_tgt_poll_group_000", 00:36:30.644 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:36:30.644 "listen_address": { 00:36:30.644 "trtype": "TCP", 00:36:30.644 "adrfam": "IPv4", 00:36:30.644 "traddr": "10.0.0.3", 00:36:30.644 "trsvcid": "4420" 00:36:30.644 }, 00:36:30.644 "peer_address": { 00:36:30.644 "trtype": "TCP", 00:36:30.644 "adrfam": "IPv4", 00:36:30.644 "traddr": "10.0.0.1", 00:36:30.644 "trsvcid": "44136" 00:36:30.644 }, 00:36:30.644 "auth": { 00:36:30.644 "state": "completed", 00:36:30.644 "digest": "sha512", 00:36:30.644 "dhgroup": "ffdhe4096" 00:36:30.644 } 00:36:30.644 } 00:36:30.644 ]' 00:36:30.644 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:36:30.902 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:36:30.902 13:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:36:30.902 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:36:30.902 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:36:30.902 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:36:30.902 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:36:30.902 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:36:31.160 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDRmMDhhYmJjOTY1NzJiYzYwMzZlMTJmOWQ1YzhmYjU2NjEzOGFmOGRlMGEyOTE5gXY1NA==: --dhchap-ctrl-secret 
DHHC-1:03:MjNiZjU3YjY2NWU1MTQ1NzNkZWJjMTk0MTRhOTQ5ODI2ZDliMzZmODI3NzExYmE2ODFiMzg1MTFjODc2OTIwZb/bPv4=: 00:36:31.160 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:00:NDRmMDhhYmJjOTY1NzJiYzYwMzZlMTJmOWQ1YzhmYjU2NjEzOGFmOGRlMGEyOTE5gXY1NA==: --dhchap-ctrl-secret DHHC-1:03:MjNiZjU3YjY2NWU1MTQ1NzNkZWJjMTk0MTRhOTQ5ODI2ZDliMzZmODI3NzExYmE2ODFiMzg1MTFjODc2OTIwZb/bPv4=: 00:36:31.751 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:36:31.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:36:31.751 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:36:31.751 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.751 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:31.751 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.751 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:36:31.751 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:31.751 13:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:31.751 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:36:31.751 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:36:31.751 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:36:31.751 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:36:31.751 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:36:31.751 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:36:31.752 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:31.752 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.752 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:31.752 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.752 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:31.752 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:31.752 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:32.012 00:36:32.012 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:36:32.012 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:36:32.012 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:36:32.268 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:32.268 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:36:32.268 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.268 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:32.268 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.268 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:36:32.268 { 00:36:32.268 "cntlid": 123, 00:36:32.268 "qid": 0, 00:36:32.268 "state": "enabled", 00:36:32.268 "thread": "nvmf_tgt_poll_group_000", 00:36:32.268 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:36:32.268 "listen_address": { 00:36:32.268 "trtype": "TCP", 00:36:32.269 "adrfam": "IPv4", 00:36:32.269 "traddr": "10.0.0.3", 00:36:32.269 "trsvcid": "4420" 00:36:32.269 }, 00:36:32.269 "peer_address": { 00:36:32.269 "trtype": "TCP", 00:36:32.269 "adrfam": "IPv4", 00:36:32.269 "traddr": "10.0.0.1", 00:36:32.269 "trsvcid": "44174" 00:36:32.269 }, 00:36:32.269 "auth": { 00:36:32.269 "state": "completed", 00:36:32.269 "digest": "sha512", 00:36:32.269 "dhgroup": "ffdhe4096" 00:36:32.269 } 00:36:32.269 } 00:36:32.269 ]' 00:36:32.269 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:36:32.269 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:36:32.269 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:36:32.526 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:36:32.526 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:36:32.526 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:36:32.526 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:36:32.526 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:36:32.783 13:57:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmFhMjQ2OTI0ZGE4NGFhODZlMTVhMTkyZjlkZWRiNWJI/dgb: --dhchap-ctrl-secret DHHC-1:02:ZDE1MjU0YTE2NTU4YjBjYmU2ZGM2MjNmNDVlZGUyYmIwM2E2ZWU1ODFhMzAwYTM2myPlpA==: 00:36:32.783 13:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:01:YmFhMjQ2OTI0ZGE4NGFhODZlMTVhMTkyZjlkZWRiNWJI/dgb: --dhchap-ctrl-secret DHHC-1:02:ZDE1MjU0YTE2NTU4YjBjYmU2ZGM2MjNmNDVlZGUyYmIwM2E2ZWU1ODFhMzAwYTM2myPlpA==: 00:36:33.347 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:36:33.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:36:33.347 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:36:33.347 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.347 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:33.347 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.347 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:36:33.347 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:33.347 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:33.347 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:36:33.347 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:36:33.347 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:36:33.347 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:36:33.347 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:36:33.347 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:36:33.347 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:33.347 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.347 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:33.347 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.347 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:33.348 13:57:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:33.348 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:33.914 00:36:33.914 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:36:33.914 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:36:33.914 13:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:36:33.914 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:33.914 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:36:33.914 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.914 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:33.914 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.914 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:36:33.914 { 00:36:33.914 "cntlid": 125, 00:36:33.914 "qid": 0, 00:36:33.914 "state": "enabled", 00:36:33.914 "thread": "nvmf_tgt_poll_group_000", 00:36:33.914 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:36:33.914 "listen_address": { 00:36:33.914 "trtype": "TCP", 00:36:33.914 "adrfam": "IPv4", 00:36:33.914 "traddr": "10.0.0.3", 00:36:33.914 "trsvcid": "4420" 00:36:33.914 }, 00:36:33.914 "peer_address": { 00:36:33.914 "trtype": "TCP", 00:36:33.914 "adrfam": "IPv4", 00:36:33.914 "traddr": "10.0.0.1", 00:36:33.914 "trsvcid": "49504" 00:36:33.914 }, 00:36:33.914 "auth": { 00:36:33.914 "state": "completed", 00:36:33.914 "digest": "sha512", 00:36:33.914 "dhgroup": "ffdhe4096" 00:36:33.914 } 00:36:33.914 } 00:36:33.914 ]' 00:36:33.914 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:36:33.914 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:36:33.914 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:36:34.173 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:36:34.173 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:36:34.173 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:36:34.173 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:36:34.173 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:36:34.434 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmZkYzlhNjM4ZDAzN2FlYjUzYjg0YzEwNDZlODkzNGNjZWRiZjk3ZDZkYWZhOGM3YSt+tw==: --dhchap-ctrl-secret DHHC-1:01:OWEzZTBjOTMzNGE3MmVhZWVkYTA3ZDJlYTU0OTQ2OWNbZz1G: 00:36:34.434 13:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:02:ZmZkYzlhNjM4ZDAzN2FlYjUzYjg0YzEwNDZlODkzNGNjZWRiZjk3ZDZkYWZhOGM3YSt+tw==: --dhchap-ctrl-secret DHHC-1:01:OWEzZTBjOTMzNGE3MmVhZWVkYTA3ZDJlYTU0OTQ2OWNbZz1G: 00:36:35.004 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:36:35.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:36:35.004 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:36:35.004 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.004 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:35.004 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.004 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:36:35.004 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:35.004 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:35.004 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:36:35.004 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:36:35.004 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:36:35.004 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:36:35.004 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:36:35.004 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:36:35.004 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key3 00:36:35.004 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.004 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:35.004 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.004 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:36:35.004 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:36:35.004 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:36:35.264 00:36:35.524 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:36:35.524 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:36:35.524 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:36:35.524 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:35.524 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:36:35.524 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.524 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:35.785 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.785 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:36:35.785 { 00:36:35.785 "cntlid": 127, 00:36:35.785 "qid": 0, 00:36:35.785 "state": "enabled", 00:36:35.785 "thread": "nvmf_tgt_poll_group_000", 00:36:35.785 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:36:35.785 "listen_address": { 00:36:35.785 "trtype": "TCP", 00:36:35.785 "adrfam": "IPv4", 00:36:35.785 "traddr": "10.0.0.3", 00:36:35.785 "trsvcid": "4420" 00:36:35.785 }, 00:36:35.785 "peer_address": { 00:36:35.785 "trtype": "TCP", 00:36:35.785 "adrfam": "IPv4", 00:36:35.785 "traddr": "10.0.0.1", 00:36:35.785 "trsvcid": "49542" 00:36:35.785 }, 00:36:35.785 "auth": { 00:36:35.785 "state": "completed", 00:36:35.785 "digest": "sha512", 00:36:35.785 "dhgroup": "ffdhe4096" 00:36:35.785 } 00:36:35.785 } 00:36:35.785 ]' 00:36:35.785 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:36:35.785 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:36:35.785 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:36:35.785 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:36:35.785 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:36:35.785 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:36:35.785 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:36:35.785 13:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:36:36.044 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzc5MWYwYmYwNzgxNjQxNzkyMTBkNGU2OGU1ZTU0OWM5MmQxMmJiNjc5NzMyODczNTEwZjQ3YzVhYjJiZWQxOUnX21U=: 00:36:36.044 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:03:Nzc5MWYwYmYwNzgxNjQxNzkyMTBkNGU2OGU1ZTU0OWM5MmQxMmJiNjc5NzMyODczNTEwZjQ3YzVhYjJiZWQxOUnX21U=: 00:36:36.684 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:36:36.684 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:36:36.684 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:36:36.684 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.684 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:36.684 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.684 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:36:36.684 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:36:36.684 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:36.684 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:36.684 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:36:36.684 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:36:36.684 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:36:36.684 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:36:36.684 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:36:36.684 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:36:36.684 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:36.684 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.684 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:36.684 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.684 13:57:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:36.684 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:36.684 13:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:37.255 00:36:37.255 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:36:37.255 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:36:37.255 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:36:37.255 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:37.255 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:36:37.255 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.255 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:37.255 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.255 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:36:37.255 { 00:36:37.255 "cntlid": 129, 00:36:37.255 "qid": 0, 00:36:37.255 "state": "enabled", 00:36:37.255 "thread": "nvmf_tgt_poll_group_000", 00:36:37.255 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:36:37.255 "listen_address": { 00:36:37.255 "trtype": "TCP", 00:36:37.255 "adrfam": "IPv4", 00:36:37.255 "traddr": "10.0.0.3", 00:36:37.255 "trsvcid": "4420" 00:36:37.255 }, 00:36:37.255 "peer_address": { 00:36:37.255 "trtype": "TCP", 00:36:37.255 "adrfam": "IPv4", 00:36:37.255 "traddr": "10.0.0.1", 00:36:37.255 "trsvcid": "49558" 00:36:37.255 }, 00:36:37.255 "auth": { 00:36:37.255 "state": "completed", 00:36:37.255 "digest": "sha512", 00:36:37.255 "dhgroup": "ffdhe6144" 00:36:37.255 } 00:36:37.255 } 00:36:37.255 ]' 00:36:37.255 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:36:37.255 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:36:37.255 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:36:37.515 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:36:37.515 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:36:37.515 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:36:37.515 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:36:37.515 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:36:37.515 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDRmMDhhYmJjOTY1NzJiYzYwMzZlMTJmOWQ1YzhmYjU2NjEzOGFmOGRlMGEyOTE5gXY1NA==: --dhchap-ctrl-secret DHHC-1:03:MjNiZjU3YjY2NWU1MTQ1NzNkZWJjMTk0MTRhOTQ5ODI2ZDliMzZmODI3NzExYmE2ODFiMzg1MTFjODc2OTIwZb/bPv4=: 00:36:37.515 13:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:00:NDRmMDhhYmJjOTY1NzJiYzYwMzZlMTJmOWQ1YzhmYjU2NjEzOGFmOGRlMGEyOTE5gXY1NA==: --dhchap-ctrl-secret DHHC-1:03:MjNiZjU3YjY2NWU1MTQ1NzNkZWJjMTk0MTRhOTQ5ODI2ZDliMzZmODI3NzExYmE2ODFiMzg1MTFjODc2OTIwZb/bPv4=: 00:36:38.451 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:36:38.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:36:38.451 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:36:38.451 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.451 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:38.451 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.451 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:36:38.451 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:38.451 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:38.451 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:36:38.451 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:36:38.451 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:36:38.451 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:36:38.451 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:36:38.451 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:36:38.451 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:38.451 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.451 13:57:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:38.451 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.451 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:38.451 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:38.451 13:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:39.017 00:36:39.017 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:36:39.017 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:36:39.017 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:36:39.017 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:39.017 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:36:39.017 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.017 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:39.017 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.017 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:36:39.017 { 00:36:39.017 "cntlid": 131, 00:36:39.017 "qid": 0, 00:36:39.017 "state": "enabled", 00:36:39.017 "thread": "nvmf_tgt_poll_group_000", 00:36:39.017 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:36:39.017 "listen_address": { 00:36:39.017 "trtype": "TCP", 00:36:39.017 "adrfam": "IPv4", 00:36:39.017 "traddr": "10.0.0.3", 00:36:39.017 "trsvcid": "4420" 00:36:39.017 }, 00:36:39.017 "peer_address": { 00:36:39.017 "trtype": "TCP", 00:36:39.017 "adrfam": "IPv4", 00:36:39.017 "traddr": "10.0.0.1", 00:36:39.017 "trsvcid": "49588" 00:36:39.017 }, 00:36:39.017 "auth": { 00:36:39.017 "state": "completed", 00:36:39.017 "digest": "sha512", 00:36:39.017 "dhgroup": "ffdhe6144" 00:36:39.017 } 00:36:39.017 } 00:36:39.017 ]' 00:36:39.017 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:36:39.275 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:36:39.275 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:36:39.275 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:36:39.275 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:36:39.275 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:36:39.275 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:36:39.275 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:36:39.533 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmFhMjQ2OTI0ZGE4NGFhODZlMTVhMTkyZjlkZWRiNWJI/dgb: --dhchap-ctrl-secret DHHC-1:02:ZDE1MjU0YTE2NTU4YjBjYmU2ZGM2MjNmNDVlZGUyYmIwM2E2ZWU1ODFhMzAwYTM2myPlpA==: 00:36:39.533 13:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:01:YmFhMjQ2OTI0ZGE4NGFhODZlMTVhMTkyZjlkZWRiNWJI/dgb: --dhchap-ctrl-secret DHHC-1:02:ZDE1MjU0YTE2NTU4YjBjYmU2ZGM2MjNmNDVlZGUyYmIwM2E2ZWU1ODFhMzAwYTM2myPlpA==: 00:36:40.102 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:36:40.102 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:36:40.102 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:36:40.102 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.102 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:40.102 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.102 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:36:40.102 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:40.102 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:40.102 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:36:40.102 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:36:40.102 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:36:40.102 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:36:40.102 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:36:40.102 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:36:40.102 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:40.102 13:57:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.102 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:40.361 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.361 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:40.361 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:40.361 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:40.619 00:36:40.619 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:36:40.619 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:36:40.619 13:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:36:40.879 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:40.879 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:36:40.879 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.879 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:40.879 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.879 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:36:40.879 { 00:36:40.879 "cntlid": 133, 00:36:40.879 "qid": 0, 00:36:40.879 "state": "enabled", 00:36:40.879 "thread": "nvmf_tgt_poll_group_000", 00:36:40.879 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:36:40.879 "listen_address": { 00:36:40.879 "trtype": "TCP", 00:36:40.879 "adrfam": "IPv4", 00:36:40.879 "traddr": "10.0.0.3", 00:36:40.879 "trsvcid": "4420" 00:36:40.879 }, 00:36:40.879 "peer_address": { 00:36:40.879 "trtype": "TCP", 00:36:40.879 "adrfam": "IPv4", 00:36:40.879 "traddr": "10.0.0.1", 00:36:40.879 "trsvcid": "49620" 00:36:40.879 }, 00:36:40.879 "auth": { 00:36:40.879 "state": "completed", 00:36:40.879 "digest": "sha512", 00:36:40.879 "dhgroup": "ffdhe6144" 00:36:40.879 } 00:36:40.879 } 00:36:40.879 ]' 00:36:40.879 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:36:40.879 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:36:40.879 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:36:40.879 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:36:40.879 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:36:40.879 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:36:40.879 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:36:41.138 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:36:41.138 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmZkYzlhNjM4ZDAzN2FlYjUzYjg0YzEwNDZlODkzNGNjZWRiZjk3ZDZkYWZhOGM3YSt+tw==: --dhchap-ctrl-secret DHHC-1:01:OWEzZTBjOTMzNGE3MmVhZWVkYTA3ZDJlYTU0OTQ2OWNbZz1G: 00:36:41.138 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:02:ZmZkYzlhNjM4ZDAzN2FlYjUzYjg0YzEwNDZlODkzNGNjZWRiZjk3ZDZkYWZhOGM3YSt+tw==: --dhchap-ctrl-secret DHHC-1:01:OWEzZTBjOTMzNGE3MmVhZWVkYTA3ZDJlYTU0OTQ2OWNbZz1G: 00:36:41.705 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:36:41.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:36:41.705 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:36:41.705 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.706 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:41.706 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.706 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:36:41.706 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:41.706 13:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:41.964 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:36:41.964 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:36:41.964 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:36:41.964 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:36:41.964 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:36:41.964 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:36:41.964 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key3 00:36:41.964 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.964 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:41.964 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.964 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:36:41.964 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:36:41.964 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:36:42.530 00:36:42.530 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:36:42.530 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:36:42.530 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:36:42.788 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:42.788 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:36:42.788 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.788 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:42.788 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.788 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:36:42.788 { 00:36:42.788 "cntlid": 135, 00:36:42.788 "qid": 0, 00:36:42.788 "state": "enabled", 00:36:42.788 "thread": "nvmf_tgt_poll_group_000", 00:36:42.788 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:36:42.788 "listen_address": { 00:36:42.788 "trtype": "TCP", 00:36:42.788 "adrfam": "IPv4", 00:36:42.788 "traddr": "10.0.0.3", 00:36:42.788 "trsvcid": "4420" 00:36:42.788 }, 00:36:42.788 "peer_address": { 00:36:42.788 "trtype": "TCP", 00:36:42.788 "adrfam": "IPv4", 00:36:42.788 "traddr": "10.0.0.1", 00:36:42.788 "trsvcid": "49632" 00:36:42.788 }, 00:36:42.788 "auth": { 00:36:42.788 "state": "completed", 00:36:42.788 "digest": "sha512", 00:36:42.788 "dhgroup": "ffdhe6144" 00:36:42.788 } 00:36:42.788 } 00:36:42.788 ]' 00:36:42.788 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:36:42.788 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:36:42.788 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:36:42.788 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:36:42.788 13:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:36:42.788 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:36:42.788 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:36:42.788 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:36:43.046 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzc5MWYwYmYwNzgxNjQxNzkyMTBkNGU2OGU1ZTU0OWM5MmQxMmJiNjc5NzMyODczNTEwZjQ3YzVhYjJiZWQxOUnX21U=: 00:36:43.046 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:03:Nzc5MWYwYmYwNzgxNjQxNzkyMTBkNGU2OGU1ZTU0OWM5MmQxMmJiNjc5NzMyODczNTEwZjQ3YzVhYjJiZWQxOUnX21U=: 00:36:43.612 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:36:43.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:36:43.613 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:36:43.613 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.613 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:43.613 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.613 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:36:43.613 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:36:43.613 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:43.613 13:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:43.871 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:36:43.871 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:36:43.871 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:36:43.871 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:36:43.871 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:36:43.871 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:36:43.871 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:43.871 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.871 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:43.871 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.871 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:43.871 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:43.871 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:44.466 00:36:44.466 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:36:44.466 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:36:44.466 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:36:44.725 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:44.725 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:36:44.725 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.725 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:44.725 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.725 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:36:44.725 { 00:36:44.725 "cntlid": 137, 00:36:44.725 "qid": 0, 00:36:44.725 "state": "enabled", 00:36:44.725 "thread": "nvmf_tgt_poll_group_000", 00:36:44.725 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:36:44.725 "listen_address": { 00:36:44.725 "trtype": "TCP", 00:36:44.725 "adrfam": "IPv4", 00:36:44.725 "traddr": "10.0.0.3", 00:36:44.725 "trsvcid": "4420" 00:36:44.725 }, 00:36:44.725 "peer_address": { 00:36:44.725 "trtype": "TCP", 00:36:44.725 "adrfam": "IPv4", 00:36:44.725 "traddr": "10.0.0.1", 00:36:44.725 "trsvcid": "39920" 00:36:44.725 }, 00:36:44.725 "auth": { 00:36:44.725 "state": "completed", 00:36:44.726 "digest": "sha512", 00:36:44.726 "dhgroup": "ffdhe8192" 00:36:44.726 } 00:36:44.726 } 00:36:44.726 ]' 00:36:44.726 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:36:44.726 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:36:44.726 13:57:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:36:44.726 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:36:44.726 13:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:36:44.726 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:36:44.726 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:36:44.726 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:36:44.985 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDRmMDhhYmJjOTY1NzJiYzYwMzZlMTJmOWQ1YzhmYjU2NjEzOGFmOGRlMGEyOTE5gXY1NA==: --dhchap-ctrl-secret DHHC-1:03:MjNiZjU3YjY2NWU1MTQ1NzNkZWJjMTk0MTRhOTQ5ODI2ZDliMzZmODI3NzExYmE2ODFiMzg1MTFjODc2OTIwZb/bPv4=: 00:36:44.985 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:00:NDRmMDhhYmJjOTY1NzJiYzYwMzZlMTJmOWQ1YzhmYjU2NjEzOGFmOGRlMGEyOTE5gXY1NA==: --dhchap-ctrl-secret DHHC-1:03:MjNiZjU3YjY2NWU1MTQ1NzNkZWJjMTk0MTRhOTQ5ODI2ZDliMzZmODI3NzExYmE2ODFiMzg1MTFjODc2OTIwZb/bPv4=: 00:36:45.554 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:36:45.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:36:45.554 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:36:45.554 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.554 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:45.554 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.554 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:36:45.554 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:45.554 13:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:45.813 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:36:45.813 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:36:45.813 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:36:45.813 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:36:45.813 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:36:45.813 13:57:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:36:45.813 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:45.813 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.813 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:45.813 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.813 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:45.813 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:45.813 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:46.381 00:36:46.381 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:36:46.381 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:36:46.381 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:36:46.641 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:46.641 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:36:46.641 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.642 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:46.642 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.642 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:36:46.642 { 00:36:46.642 "cntlid": 139, 00:36:46.642 "qid": 0, 00:36:46.642 "state": "enabled", 00:36:46.642 "thread": "nvmf_tgt_poll_group_000", 00:36:46.642 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:36:46.642 "listen_address": { 00:36:46.642 "trtype": "TCP", 00:36:46.642 "adrfam": "IPv4", 00:36:46.642 "traddr": "10.0.0.3", 00:36:46.642 "trsvcid": "4420" 00:36:46.642 }, 00:36:46.642 "peer_address": { 00:36:46.642 "trtype": "TCP", 00:36:46.642 "adrfam": "IPv4", 00:36:46.642 "traddr": "10.0.0.1", 00:36:46.642 "trsvcid": "39954" 00:36:46.642 }, 00:36:46.642 "auth": { 00:36:46.642 "state": "completed", 00:36:46.642 "digest": "sha512", 00:36:46.642 "dhgroup": "ffdhe8192" 00:36:46.642 } 00:36:46.642 } 00:36:46.642 ]' 00:36:46.642 13:57:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:36:46.642 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:36:46.642 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:36:46.642 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:36:46.642 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:36:46.901 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:36:46.901 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:36:46.901 13:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:36:46.901 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmFhMjQ2OTI0ZGE4NGFhODZlMTVhMTkyZjlkZWRiNWJI/dgb: --dhchap-ctrl-secret DHHC-1:02:ZDE1MjU0YTE2NTU4YjBjYmU2ZGM2MjNmNDVlZGUyYmIwM2E2ZWU1ODFhMzAwYTM2myPlpA==: 00:36:46.901 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:01:YmFhMjQ2OTI0ZGE4NGFhODZlMTVhMTkyZjlkZWRiNWJI/dgb: --dhchap-ctrl-secret DHHC-1:02:ZDE1MjU0YTE2NTU4YjBjYmU2ZGM2MjNmNDVlZGUyYmIwM2E2ZWU1ODFhMzAwYTM2myPlpA==: 00:36:47.470 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:36:47.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:36:47.728 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:36:47.728 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.728 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:47.728 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.728 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:36:47.728 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:47.728 13:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:47.728 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:36:47.728 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:36:47.728 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:36:47.728 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:36:47.728 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:36:47.728 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:36:47.728 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:47.728 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.728 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:47.987 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.987 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:47.987 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:47.987 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:48.246 00:36:48.504 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:36:48.504 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:36:48.504 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:36:48.504 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:48.504 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:36:48.504 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.504 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:48.504 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.504 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:36:48.504 { 00:36:48.504 "cntlid": 141, 00:36:48.504 "qid": 0, 00:36:48.504 "state": "enabled", 00:36:48.504 "thread": "nvmf_tgt_poll_group_000", 00:36:48.504 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:36:48.504 "listen_address": { 00:36:48.504 "trtype": "TCP", 00:36:48.504 "adrfam": "IPv4", 00:36:48.504 "traddr": "10.0.0.3", 00:36:48.504 "trsvcid": "4420" 00:36:48.504 }, 00:36:48.504 "peer_address": { 00:36:48.504 "trtype": "TCP", 00:36:48.504 "adrfam": "IPv4", 00:36:48.504 "traddr": "10.0.0.1", 00:36:48.504 "trsvcid": "39994" 00:36:48.504 }, 00:36:48.504 "auth": { 00:36:48.504 "state": "completed", 00:36:48.504 "digest": 
"sha512", 00:36:48.504 "dhgroup": "ffdhe8192" 00:36:48.504 } 00:36:48.504 } 00:36:48.504 ]' 00:36:48.504 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:36:48.762 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:36:48.762 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:36:48.762 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:36:48.762 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:36:48.762 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:36:48.762 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:36:48.762 13:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:36:49.021 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmZkYzlhNjM4ZDAzN2FlYjUzYjg0YzEwNDZlODkzNGNjZWRiZjk3ZDZkYWZhOGM3YSt+tw==: --dhchap-ctrl-secret DHHC-1:01:OWEzZTBjOTMzNGE3MmVhZWVkYTA3ZDJlYTU0OTQ2OWNbZz1G: 00:36:49.021 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:02:ZmZkYzlhNjM4ZDAzN2FlYjUzYjg0YzEwNDZlODkzNGNjZWRiZjk3ZDZkYWZhOGM3YSt+tw==: --dhchap-ctrl-secret DHHC-1:01:OWEzZTBjOTMzNGE3MmVhZWVkYTA3ZDJlYTU0OTQ2OWNbZz1G: 00:36:49.588 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:36:49.588 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:36:49.588 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:36:49.588 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.588 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:49.588 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.588 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:36:49.588 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:49.589 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:49.848 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:36:49.848 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:36:49.848 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:36:49.848 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:36:49.848 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:36:49.848 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:36:49.848 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key3 00:36:49.848 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.848 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:49.848 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.848 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:36:49.848 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:36:49.848 13:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:36:50.425 00:36:50.425 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:36:50.425 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:36:50.425 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:36:50.699 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:50.699 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:36:50.699 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.699 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:50.699 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.699 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:36:50.699 { 00:36:50.699 "cntlid": 143, 00:36:50.699 "qid": 0, 00:36:50.699 "state": "enabled", 00:36:50.699 "thread": "nvmf_tgt_poll_group_000", 00:36:50.699 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:36:50.699 "listen_address": { 00:36:50.699 "trtype": "TCP", 00:36:50.699 "adrfam": "IPv4", 00:36:50.699 "traddr": "10.0.0.3", 00:36:50.699 "trsvcid": "4420" 00:36:50.699 }, 00:36:50.699 "peer_address": { 00:36:50.699 "trtype": "TCP", 00:36:50.699 "adrfam": "IPv4", 00:36:50.699 "traddr": "10.0.0.1", 00:36:50.699 "trsvcid": "40018" 00:36:50.699 }, 00:36:50.699 "auth": { 00:36:50.699 "state": "completed", 00:36:50.699 
"digest": "sha512", 00:36:50.699 "dhgroup": "ffdhe8192" 00:36:50.699 } 00:36:50.699 } 00:36:50.699 ]' 00:36:50.699 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:36:50.699 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:36:50.699 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:36:50.699 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:36:50.700 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:36:50.700 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:36:50.700 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:36:50.700 13:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:36:50.958 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Nzc5MWYwYmYwNzgxNjQxNzkyMTBkNGU2OGU1ZTU0OWM5MmQxMmJiNjc5NzMyODczNTEwZjQ3YzVhYjJiZWQxOUnX21U=: 00:36:50.958 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:03:Nzc5MWYwYmYwNzgxNjQxNzkyMTBkNGU2OGU1ZTU0OWM5MmQxMmJiNjc5NzMyODczNTEwZjQ3YzVhYjJiZWQxOUnX21U=: 00:36:51.526 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:36:51.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:36:51.527 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:36:51.527 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.527 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:51.527 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.527 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:36:51.527 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:36:51.527 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:36:51.527 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:51.527 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:51.527 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:51.785 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:36:51.785 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:36:51.785 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:36:51.785 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:36:51.785 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:36:51.785 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:36:51.785 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:51.785 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.785 13:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:51.785 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.785 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:51.785 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:51.785 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:52.368 00:36:52.368 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:36:52.368 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:36:52.368 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:36:52.626 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:52.626 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:36:52.626 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.626 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:52.626 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.626 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:36:52.626 { 00:36:52.626 "cntlid": 145, 00:36:52.626 "qid": 0, 00:36:52.626 "state": "enabled", 00:36:52.626 "thread": "nvmf_tgt_poll_group_000", 00:36:52.626 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:36:52.626 "listen_address": { 00:36:52.626 "trtype": "TCP", 00:36:52.626 "adrfam": "IPv4", 00:36:52.626 "traddr": "10.0.0.3", 00:36:52.626 "trsvcid": "4420" 00:36:52.626 }, 00:36:52.626 "peer_address": { 00:36:52.626 "trtype": "TCP", 00:36:52.626 "adrfam": "IPv4", 00:36:52.626 "traddr": "10.0.0.1", 00:36:52.626 "trsvcid": "40048" 00:36:52.626 }, 00:36:52.626 "auth": { 00:36:52.626 "state": "completed", 00:36:52.626 "digest": "sha512", 00:36:52.626 "dhgroup": "ffdhe8192" 00:36:52.626 } 00:36:52.626 } 00:36:52.626 ]' 00:36:52.626 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:36:52.626 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:36:52.626 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:36:52.884 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:36:52.884 13:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:36:52.884 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:36:52.884 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:36:52.884 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:36:53.143 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDRmMDhhYmJjOTY1NzJiYzYwMzZlMTJmOWQ1YzhmYjU2NjEzOGFmOGRlMGEyOTE5gXY1NA==: --dhchap-ctrl-secret DHHC-1:03:MjNiZjU3YjY2NWU1MTQ1NzNkZWJjMTk0MTRhOTQ5ODI2ZDliMzZmODI3NzExYmE2ODFiMzg1MTFjODc2OTIwZb/bPv4=: 00:36:53.143 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:00:NDRmMDhhYmJjOTY1NzJiYzYwMzZlMTJmOWQ1YzhmYjU2NjEzOGFmOGRlMGEyOTE5gXY1NA==: --dhchap-ctrl-secret DHHC-1:03:MjNiZjU3YjY2NWU1MTQ1NzNkZWJjMTk0MTRhOTQ5ODI2ZDliMzZmODI3NzExYmE2ODFiMzg1MTFjODc2OTIwZb/bPv4=: 00:36:53.710 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:36:53.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:36:53.710 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:36:53.710 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.710 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:53.710 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.710 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key1 00:36:53.710 13:57:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.710 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:53.710 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.710 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:36:53.710 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:36:53.710 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:36:53.710 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:36:53.710 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:53.710 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:36:53.710 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:53.710 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:36:53.710 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:36:53.710 13:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:36:54.278 request: 00:36:54.278 { 00:36:54.278 "name": "nvme0", 00:36:54.278 "trtype": "tcp", 00:36:54.278 "traddr": "10.0.0.3", 00:36:54.278 "adrfam": "ipv4", 00:36:54.278 "trsvcid": "4420", 00:36:54.278 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:36:54.278 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:36:54.278 "prchk_reftag": false, 00:36:54.278 "prchk_guard": false, 00:36:54.278 "hdgst": false, 00:36:54.278 "ddgst": false, 00:36:54.278 "dhchap_key": "key2", 00:36:54.278 "allow_unrecognized_csi": false, 00:36:54.278 "method": "bdev_nvme_attach_controller", 00:36:54.278 "req_id": 1 00:36:54.278 } 00:36:54.278 Got JSON-RPC error response 00:36:54.278 response: 00:36:54.278 { 00:36:54.278 "code": -5, 00:36:54.278 "message": "Input/output error" 00:36:54.278 } 00:36:54.278 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:36:54.278 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:54.278 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:54.278 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:54.278 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:36:54.278 
13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.278 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:54.278 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.278 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:54.278 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.278 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:54.278 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.278 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:54.278 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:36:54.278 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:54.278 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:36:54.278 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:54.278 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:36:54.278 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:54.278 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:54.278 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:54.278 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:54.845 request: 00:36:54.845 { 00:36:54.845 "name": "nvme0", 00:36:54.845 "trtype": "tcp", 00:36:54.845 "traddr": "10.0.0.3", 00:36:54.845 "adrfam": "ipv4", 00:36:54.845 "trsvcid": "4420", 00:36:54.845 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:36:54.845 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:36:54.845 "prchk_reftag": false, 00:36:54.845 "prchk_guard": false, 00:36:54.845 "hdgst": false, 00:36:54.845 "ddgst": false, 00:36:54.845 "dhchap_key": "key1", 00:36:54.845 "dhchap_ctrlr_key": "ckey2", 00:36:54.845 "allow_unrecognized_csi": false, 00:36:54.845 "method": "bdev_nvme_attach_controller", 00:36:54.845 "req_id": 1 00:36:54.845 } 00:36:54.845 Got JSON-RPC error response 00:36:54.845 response: 00:36:54.845 { 
00:36:54.845 "code": -5, 00:36:54.845 "message": "Input/output error" 00:36:54.845 } 00:36:54.845 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:36:54.845 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:54.845 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:54.845 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:54.845 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:36:54.845 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.845 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:54.845 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.846 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key1 00:36:54.846 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.846 13:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:54.846 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.846 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:54.846 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:36:54.846 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:54.846 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:36:54.846 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:54.846 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:36:54.846 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:54.846 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:54.846 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:54.846 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:55.414 
request: 00:36:55.414 { 00:36:55.414 "name": "nvme0", 00:36:55.414 "trtype": "tcp", 00:36:55.414 "traddr": "10.0.0.3", 00:36:55.414 "adrfam": "ipv4", 00:36:55.414 "trsvcid": "4420", 00:36:55.414 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:36:55.414 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:36:55.414 "prchk_reftag": false, 00:36:55.414 "prchk_guard": false, 00:36:55.414 "hdgst": false, 00:36:55.414 "ddgst": false, 00:36:55.414 "dhchap_key": "key1", 00:36:55.414 "dhchap_ctrlr_key": "ckey1", 00:36:55.414 "allow_unrecognized_csi": false, 00:36:55.414 "method": "bdev_nvme_attach_controller", 00:36:55.414 "req_id": 1 00:36:55.414 } 00:36:55.414 Got JSON-RPC error response 00:36:55.414 response: 00:36:55.414 { 00:36:55.414 "code": -5, 00:36:55.414 "message": "Input/output error" 00:36:55.414 } 00:36:55.414 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:36:55.414 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:55.414 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:55.414 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:55.414 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:36:55.414 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.414 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:55.414 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.414 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 67476 00:36:55.414 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67476 ']' 00:36:55.414 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67476 00:36:55.414 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:36:55.414 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:55.414 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67476 00:36:55.414 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:55.414 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:55.414 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67476' 00:36:55.414 killing process with pid 67476 00:36:55.414 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67476 00:36:55.414 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67476 00:36:55.673 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:36:55.673 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:55.673 13:57:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:55.673 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:55.673 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=70249 00:36:55.673 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 70249 00:36:55.673 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70249 ']' 00:36:55.673 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:55.673 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:55.673 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:55.673 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:36:55.673 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:55.673 13:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:56.608 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:56.608 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:36:56.608 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:56.608 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:56.608 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:56.866 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:56.866 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:36:56.866 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 70249 00:36:56.866 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70249 ']' 00:36:56.866 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:56.866 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:56.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:56.866 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
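At this point the test brings up a second nvmf_tgt instance with DH-HMAC-CHAP debug logging enabled and, in the entries that follow, registers the generated secrets through the keyring so that key0..key3 and ckey0..ckey2 can be referenced by name. A condensed sketch of that setup, using the binary path, key files, and key names recorded in this run (the backgrounding, the shortened scripts/rpc.py path, and the default target RPC socket are illustrative assumptions, not taken from the log):

# start a fresh target with nvmf_auth debug logging; it waits for RPC before initializing
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
# register each DH-HMAC-CHAP secret (and its controller counterpart) as a keyring file key
scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.VA4
scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.LZ8
scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-sha256.OQ0
scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.cZV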
00:36:56.866 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:56.866 13:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:56.866 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:56.866 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:36:56.866 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:36:56.866 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:56.866 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:57.128 null0 00:36:57.128 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.128 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:36:57.128 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.VA4 00:36:57.128 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.128 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:57.128 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.128 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.LZ8 ]] 00:36:57.128 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.LZ8 00:36:57.128 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.128 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:57.128 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.128 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:36:57.128 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.OQ0 00:36:57.128 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.128 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:57.128 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.128 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.cZV ]] 00:36:57.128 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.cZV 00:36:57.128 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.128 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:57.128 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.128 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:36:57.128 13:57:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Ju5 00:36:57.128 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.128 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:57.128 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.128 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.IF4 ]] 00:36:57.128 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.IF4 00:36:57.128 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.128 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:57.129 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.129 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:36:57.129 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Jfl 00:36:57.129 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.129 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:57.129 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.129 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:36:57.129 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:36:57.129 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:36:57.129 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:36:57.129 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:36:57.129 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:36:57.129 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:36:57.129 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key3 00:36:57.129 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.129 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:57.129 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.129 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:36:57.129 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
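The connect_authenticate pass above exercises sha512 with ffdhe8192 using key3. Condensed, the host/target RPC sequence it drives looks roughly like the following; the host socket /var/tmp/host.sock, addresses, and NQNs are the ones used throughout this run, the shortened scripts/rpc.py path is an assumption, and the jq filter mirrors the checks at target/auth.sh@75-77:

# host: restrict DH-HMAC-CHAP negotiation to one digest/dhgroup pair
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
# target: allow the host NQN on the subsystem with key3
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key3
# host: attach a controller with the matching key
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
# target: confirm the qpair negotiated sha512/ffdhe8192 and reached the "completed" auth state
scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth | .digest, .dhgroup, .state'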
00:36:57.129 13:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:36:58.066 nvme0n1 00:36:58.066 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:36:58.066 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:36:58.066 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:36:58.324 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:58.324 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:36:58.324 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.324 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:58.324 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.324 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:36:58.324 { 00:36:58.324 "cntlid": 1, 00:36:58.324 "qid": 0, 00:36:58.324 "state": "enabled", 00:36:58.324 "thread": "nvmf_tgt_poll_group_000", 00:36:58.324 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:36:58.324 "listen_address": { 00:36:58.324 "trtype": "TCP", 00:36:58.324 "adrfam": "IPv4", 00:36:58.324 "traddr": "10.0.0.3", 00:36:58.324 "trsvcid": "4420" 00:36:58.324 }, 00:36:58.324 "peer_address": { 00:36:58.324 "trtype": "TCP", 00:36:58.324 "adrfam": "IPv4", 00:36:58.324 "traddr": "10.0.0.1", 00:36:58.324 "trsvcid": "52866" 00:36:58.324 }, 00:36:58.324 "auth": { 00:36:58.324 "state": "completed", 00:36:58.324 "digest": "sha512", 00:36:58.324 "dhgroup": "ffdhe8192" 00:36:58.324 } 00:36:58.324 } 00:36:58.324 ]' 00:36:58.324 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:36:58.324 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:36:58.324 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:36:58.324 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:36:58.324 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:36:58.583 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:36:58.583 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:36:58.583 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:36:58.583 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:Nzc5MWYwYmYwNzgxNjQxNzkyMTBkNGU2OGU1ZTU0OWM5MmQxMmJiNjc5NzMyODczNTEwZjQ3YzVhYjJiZWQxOUnX21U=: 00:36:58.583 13:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:03:Nzc5MWYwYmYwNzgxNjQxNzkyMTBkNGU2OGU1ZTU0OWM5MmQxMmJiNjc5NzMyODczNTEwZjQ3YzVhYjJiZWQxOUnX21U=: 00:36:59.518 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:36:59.518 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:36:59.518 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:36:59.518 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.518 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:59.518 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.518 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key3 00:36:59.518 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.518 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:36:59.518 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.518 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:36:59.518 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:36:59.518 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:36:59.518 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:36:59.518 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:36:59.518 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:36:59.518 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:59.518 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:36:59.518 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:59.518 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:36:59.518 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:36:59.518 13:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:36:59.777 request: 00:36:59.777 { 00:36:59.777 "name": "nvme0", 00:36:59.777 "trtype": "tcp", 00:36:59.777 "traddr": "10.0.0.3", 00:36:59.777 "adrfam": "ipv4", 00:36:59.777 "trsvcid": "4420", 00:36:59.777 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:36:59.777 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:36:59.777 "prchk_reftag": false, 00:36:59.777 "prchk_guard": false, 00:36:59.777 "hdgst": false, 00:36:59.777 "ddgst": false, 00:36:59.777 "dhchap_key": "key3", 00:36:59.777 "allow_unrecognized_csi": false, 00:36:59.777 "method": "bdev_nvme_attach_controller", 00:36:59.777 "req_id": 1 00:36:59.777 } 00:36:59.777 Got JSON-RPC error response 00:36:59.777 response: 00:36:59.777 { 00:36:59.777 "code": -5, 00:36:59.777 "message": "Input/output error" 00:36:59.777 } 00:36:59.777 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:36:59.777 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:59.777 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:59.777 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:59.777 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:36:59.777 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:36:59.777 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:36:59.777 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:37:00.035 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:37:00.035 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:37:00.035 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:37:00.035 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:37:00.035 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:00.035 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:37:00.035 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:00.035 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:37:00.035 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:37:00.035 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:37:00.293 request: 00:37:00.293 { 00:37:00.293 "name": "nvme0", 00:37:00.293 "trtype": "tcp", 00:37:00.293 "traddr": "10.0.0.3", 00:37:00.293 "adrfam": "ipv4", 00:37:00.293 "trsvcid": "4420", 00:37:00.293 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:37:00.293 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:37:00.293 "prchk_reftag": false, 00:37:00.293 "prchk_guard": false, 00:37:00.293 "hdgst": false, 00:37:00.293 "ddgst": false, 00:37:00.293 "dhchap_key": "key3", 00:37:00.293 "allow_unrecognized_csi": false, 00:37:00.293 "method": "bdev_nvme_attach_controller", 00:37:00.293 "req_id": 1 00:37:00.293 } 00:37:00.293 Got JSON-RPC error response 00:37:00.293 response: 00:37:00.293 { 00:37:00.293 "code": -5, 00:37:00.293 "message": "Input/output error" 00:37:00.293 } 00:37:00.293 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:37:00.293 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:00.293 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:00.293 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:00.293 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:37:00.293 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:37:00.293 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:37:00.293 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:37:00.293 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:37:00.293 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:37:00.551 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:37:00.551 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.551 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:37:00.551 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.551 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:37:00.551 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.551 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:37:00.551 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.551 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:37:00.551 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:37:00.551 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:37:00.551 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:37:00.551 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:00.551 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:37:00.551 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:00.551 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:37:00.551 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:37:00.551 13:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:37:01.115 request: 00:37:01.115 { 00:37:01.115 "name": "nvme0", 00:37:01.115 "trtype": "tcp", 00:37:01.115 "traddr": "10.0.0.3", 00:37:01.115 "adrfam": "ipv4", 00:37:01.115 "trsvcid": "4420", 00:37:01.115 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:37:01.115 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:37:01.115 "prchk_reftag": false, 00:37:01.115 "prchk_guard": false, 00:37:01.115 "hdgst": false, 00:37:01.115 "ddgst": false, 00:37:01.115 "dhchap_key": "key0", 00:37:01.115 "dhchap_ctrlr_key": "key1", 00:37:01.115 "allow_unrecognized_csi": false, 00:37:01.115 "method": "bdev_nvme_attach_controller", 00:37:01.115 "req_id": 1 00:37:01.115 } 00:37:01.115 Got JSON-RPC error response 00:37:01.115 response: 00:37:01.115 { 00:37:01.115 "code": -5, 00:37:01.115 "message": "Input/output error" 00:37:01.115 } 00:37:01.115 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:37:01.115 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:01.115 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:01.115 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:37:01.115 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:37:01.115 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:37:01.115 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:37:01.372 nvme0n1 00:37:01.372 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:37:01.372 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:37:01.372 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:37:01.372 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:01.372 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:37:01.372 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:37:01.630 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key1 00:37:01.630 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:01.630 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:37:01.630 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:01.630 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:37:01.630 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:37:01.630 13:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:37:02.559 nvme0n1 00:37:02.559 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:37:02.559 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:37:02.560 13:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:37:02.817 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:02.817 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key2 --dhchap-ctrlr-key key3 00:37:02.817 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.817 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:37:02.817 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.817 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:37:02.817 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:37:02.817 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:37:03.073 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:03.073 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmZkYzlhNjM4ZDAzN2FlYjUzYjg0YzEwNDZlODkzNGNjZWRiZjk3ZDZkYWZhOGM3YSt+tw==: --dhchap-ctrl-secret DHHC-1:03:Nzc5MWYwYmYwNzgxNjQxNzkyMTBkNGU2OGU1ZTU0OWM5MmQxMmJiNjc5NzMyODczNTEwZjQ3YzVhYjJiZWQxOUnX21U=: 00:37:03.073 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid 105ec898-1662-46bd-85be-b241e399edb9 -l 0 --dhchap-secret DHHC-1:02:ZmZkYzlhNjM4ZDAzN2FlYjUzYjg0YzEwNDZlODkzNGNjZWRiZjk3ZDZkYWZhOGM3YSt+tw==: --dhchap-ctrl-secret DHHC-1:03:Nzc5MWYwYmYwNzgxNjQxNzkyMTBkNGU2OGU1ZTU0OWM5MmQxMmJiNjc5NzMyODczNTEwZjQ3YzVhYjJiZWQxOUnX21U=: 00:37:04.004 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:37:04.004 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:37:04.004 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:37:04.004 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:37:04.004 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:37:04.004 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:37:04.004 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:37:04.004 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:37:04.004 13:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:37:04.004 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:37:04.004 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:37:04.004 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:37:04.004 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:37:04.004 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:04.004 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:37:04.004 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:04.004 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:37:04.004 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:37:04.004 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:37:04.571 request: 00:37:04.571 { 00:37:04.571 "name": "nvme0", 00:37:04.571 "trtype": "tcp", 00:37:04.571 "traddr": "10.0.0.3", 00:37:04.571 "adrfam": "ipv4", 00:37:04.571 "trsvcid": "4420", 00:37:04.571 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:37:04.571 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9", 00:37:04.571 "prchk_reftag": false, 00:37:04.571 "prchk_guard": false, 00:37:04.571 "hdgst": false, 00:37:04.571 "ddgst": false, 00:37:04.571 "dhchap_key": "key1", 00:37:04.571 "allow_unrecognized_csi": false, 00:37:04.571 "method": "bdev_nvme_attach_controller", 00:37:04.571 "req_id": 1 00:37:04.571 } 00:37:04.571 Got JSON-RPC error response 00:37:04.571 response: 00:37:04.571 { 00:37:04.571 "code": -5, 00:37:04.571 "message": "Input/output error" 00:37:04.571 } 00:37:04.571 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:37:04.571 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:04.571 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:04.571 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:04.571 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:37:04.571 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:37:04.571 13:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:37:05.507 nvme0n1 00:37:05.507 
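[editor's note] The failed attach traced above is the expected outcome: the subsystem was re-keyed to key2/key3 with nvmf_subsystem_set_keys, so a host still presenting the stale key1 is refused and the RPC returns -5 (Input/output error). The test wraps such calls in the NOT helper seen in the trace so that a failure counts as a pass. A minimal sketch of that negative-test pattern follows; the helper name expect_failure is illustrative and not the actual autotest_common.sh implementation, while the rpc.py path and arguments are taken from this run:
    expect_failure() {
        # Run the given command; succeed only if the command itself fails.
        if "$@"; then
            return 1
        fi
        return 0
    }
    # Expect the attach with the stale key1 to be rejected by the re-keyed subsystem.
    expect_failure /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1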
13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:37:05.507 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:37:05.507 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:37:05.765 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:05.765 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:37:05.765 13:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:37:06.023 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:37:06.023 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:06.023 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:37:06.023 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:06.023 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:37:06.023 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:37:06.024 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:37:06.281 nvme0n1 00:37:06.281 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:37:06.281 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:37:06.281 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:37:06.539 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:06.539 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:37:06.539 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:37:06.797 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key1 --dhchap-ctrlr-key key3 00:37:06.797 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:06.797 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:37:06.797 13:58:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:06.797 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YmFhMjQ2OTI0ZGE4NGFhODZlMTVhMTkyZjlkZWRiNWJI/dgb: '' 2s 00:37:06.797 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:37:06.797 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:37:06.797 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YmFhMjQ2OTI0ZGE4NGFhODZlMTVhMTkyZjlkZWRiNWJI/dgb: 00:37:06.797 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:37:06.797 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:37:06.797 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:37:06.797 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YmFhMjQ2OTI0ZGE4NGFhODZlMTVhMTkyZjlkZWRiNWJI/dgb: ]] 00:37:06.797 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YmFhMjQ2OTI0ZGE4NGFhODZlMTVhMTkyZjlkZWRiNWJI/dgb: 00:37:06.797 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:37:06.797 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:37:06.797 13:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:37:08.699 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:37:08.699 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:37:08.699 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:37:08.699 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:37:08.699 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:37:08.699 13:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:37:08.699 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:37:08.699 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key1 --dhchap-ctrlr-key key2 00:37:08.699 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:08.699 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:37:08.699 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:08.699 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZmZkYzlhNjM4ZDAzN2FlYjUzYjg0YzEwNDZlODkzNGNjZWRiZjk3ZDZkYWZhOGM3YSt+tw==: 2s 00:37:08.699 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:37:08.699 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:37:08.699 13:58:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:37:08.699 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZmZkYzlhNjM4ZDAzN2FlYjUzYjg0YzEwNDZlODkzNGNjZWRiZjk3ZDZkYWZhOGM3YSt+tw==: 00:37:08.699 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:37:08.699 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:37:08.699 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:37:08.699 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZmZkYzlhNjM4ZDAzN2FlYjUzYjg0YzEwNDZlODkzNGNjZWRiZjk3ZDZkYWZhOGM3YSt+tw==: ]] 00:37:08.699 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZmZkYzlhNjM4ZDAzN2FlYjUzYjg0YzEwNDZlODkzNGNjZWRiZjk3ZDZkYWZhOGM3YSt+tw==: 00:37:08.957 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:37:08.957 13:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:37:10.944 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:37:10.944 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:37:10.944 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:37:10.944 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:37:10.944 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:37:10.944 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:37:10.944 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:37:10.944 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:37:10.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:37:10.944 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key0 --dhchap-ctrlr-key key1 00:37:10.944 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.944 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:37:10.944 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.944 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:37:10.944 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:37:10.944 13:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:37:11.881 nvme0n1 00:37:11.881 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key2 --dhchap-ctrlr-key key3 00:37:11.881 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.882 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:37:11.882 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.882 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:37:11.882 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:37:12.461 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:37:12.461 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:37:12.461 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:37:12.720 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:12.720 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:37:12.720 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.720 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:37:12.720 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.720 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:37:12.720 13:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:37:12.979 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:37:12.979 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:37:12.979 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:37:12.979 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:12.979 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key2 --dhchap-ctrlr-key key3 00:37:12.979 13:58:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.979 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:37:12.979 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.979 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:37:12.979 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:37:12.979 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:37:12.979 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:37:12.980 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:12.980 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:37:12.980 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:12.980 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:37:12.980 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:37:13.548 request: 00:37:13.548 { 00:37:13.548 "name": "nvme0", 00:37:13.548 "dhchap_key": "key1", 00:37:13.548 "dhchap_ctrlr_key": "key3", 00:37:13.548 "method": "bdev_nvme_set_keys", 00:37:13.548 "req_id": 1 00:37:13.548 } 00:37:13.548 Got JSON-RPC error response 00:37:13.548 response: 00:37:13.548 { 00:37:13.548 "code": -13, 00:37:13.548 "message": "Permission denied" 00:37:13.548 } 00:37:13.548 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:37:13.548 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:13.548 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:13.548 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:13.807 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:37:13.807 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:37:13.807 13:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:37:13.807 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:37:13.807 13:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:37:15.185 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:37:15.185 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:37:15.185 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:37:15.185 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:37:15.185 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key0 --dhchap-ctrlr-key key1 00:37:15.185 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.185 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:37:15.185 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.185 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:37:15.185 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:37:15.185 13:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:37:16.125 nvme0n1 00:37:16.125 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --dhchap-key key2 --dhchap-ctrlr-key key3 00:37:16.125 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:16.125 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:37:16.125 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:16.125 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:37:16.125 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:37:16.125 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:37:16.125 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:37:16.125 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:16.125 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:37:16.125 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:16.125 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:37:16.125 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:37:16.695 request: 00:37:16.695 { 00:37:16.695 "name": "nvme0", 00:37:16.695 "dhchap_key": "key2", 00:37:16.695 "dhchap_ctrlr_key": "key0", 00:37:16.695 "method": "bdev_nvme_set_keys", 00:37:16.695 "req_id": 1 00:37:16.695 } 00:37:16.695 Got JSON-RPC error response 00:37:16.695 response: 00:37:16.695 { 00:37:16.695 "code": -13, 00:37:16.695 "message": "Permission denied" 00:37:16.695 } 00:37:16.695 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:37:16.695 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:16.695 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:16.695 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:16.695 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:37:16.695 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:37:16.695 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:37:16.695 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:37:16.695 13:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:37:18.081 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:37:18.081 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:37:18.081 13:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:37:18.081 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:37:18.081 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:37:18.081 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:37:18.081 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 67508 00:37:18.081 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67508 ']' 00:37:18.081 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67508 00:37:18.081 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:37:18.081 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:18.081 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67508 00:37:18.081 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:18.081 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:18.081 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 67508' 00:37:18.081 killing process with pid 67508 00:37:18.081 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67508 00:37:18.081 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67508 00:37:18.653 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:37:18.653 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:18.653 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:37:18.653 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:18.653 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:37:18.653 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:18.653 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:18.653 rmmod nvme_tcp 00:37:18.653 rmmod nvme_fabrics 00:37:18.653 rmmod nvme_keyring 00:37:18.653 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:18.653 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:37:18.653 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:37:18.653 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 70249 ']' 00:37:18.653 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 70249 00:37:18.653 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 70249 ']' 00:37:18.653 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 70249 00:37:18.653 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:37:18.653 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:18.653 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70249 00:37:18.653 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:18.653 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:18.653 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70249' 00:37:18.653 killing process with pid 70249 00:37:18.653 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 70249 00:37:18.653 13:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 70249 00:37:18.913 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:18.913 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:18.913 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:18.913 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:37:18.913 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 
00:37:18.913 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:37:18.913 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:18.913 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:18.913 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:37:18.913 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:37:18.913 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:37:18.913 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:37:18.913 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:37:18.913 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:37:19.172 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:37:19.172 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:37:19.172 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:37:19.172 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:37:19.172 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:37:19.172 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:37:19.172 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:37:19.172 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:37:19.172 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:37:19.172 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:19.172 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:19.172 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:19.172 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:37:19.172 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.VA4 /tmp/spdk.key-sha256.OQ0 /tmp/spdk.key-sha384.Ju5 /tmp/spdk.key-sha512.Jfl /tmp/spdk.key-sha512.LZ8 /tmp/spdk.key-sha384.cZV /tmp/spdk.key-sha256.IF4 '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:37:19.172 00:37:19.172 real 2m39.384s 00:37:19.172 user 6m13.719s 00:37:19.172 sys 0m26.506s 00:37:19.172 ************************************ 00:37:19.172 END TEST nvmf_auth_target 00:37:19.172 ************************************ 00:37:19.172 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:19.172 13:58:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:37:19.432 13:58:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:37:19.432 13:58:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:37:19.432 13:58:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:19.432 13:58:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:19.432 13:58:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:37:19.432 ************************************ 00:37:19.432 START TEST nvmf_bdevio_no_huge 00:37:19.432 ************************************ 00:37:19.432 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:37:19.432 * Looking for test storage... 00:37:19.432 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:37:19.432 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:19.432 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:37:19.432 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:19.432 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:19.432 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:19.432 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:19.432 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:19.432 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:37:19.432 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:37:19.432 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:37:19.432 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:37:19.432 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:37:19.432 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:37:19.432 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:37:19.432 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:19.432 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:37:19.432 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:37:19.432 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:19.432 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:19.432 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:37:19.432 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:37:19.432 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:19.432 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:37:19.432 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:37:19.432 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:37:19.432 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:37:19.432 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:19.432 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:37:19.432 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:37:19.432 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:19.432 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:19.432 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:37:19.432 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:19.432 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:19.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:19.432 --rc genhtml_branch_coverage=1 00:37:19.433 --rc genhtml_function_coverage=1 00:37:19.433 --rc genhtml_legend=1 00:37:19.433 --rc geninfo_all_blocks=1 00:37:19.433 --rc geninfo_unexecuted_blocks=1 00:37:19.433 00:37:19.433 ' 00:37:19.433 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:19.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:19.433 --rc genhtml_branch_coverage=1 00:37:19.433 --rc genhtml_function_coverage=1 00:37:19.433 --rc genhtml_legend=1 00:37:19.433 --rc geninfo_all_blocks=1 00:37:19.433 --rc geninfo_unexecuted_blocks=1 00:37:19.433 00:37:19.433 ' 00:37:19.433 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:19.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:19.433 --rc genhtml_branch_coverage=1 00:37:19.433 --rc genhtml_function_coverage=1 00:37:19.433 --rc genhtml_legend=1 00:37:19.433 --rc geninfo_all_blocks=1 00:37:19.433 --rc geninfo_unexecuted_blocks=1 00:37:19.433 00:37:19.433 ' 00:37:19.433 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:19.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:19.433 --rc genhtml_branch_coverage=1 00:37:19.433 --rc genhtml_function_coverage=1 00:37:19.433 --rc genhtml_legend=1 00:37:19.433 --rc geninfo_all_blocks=1 00:37:19.433 --rc geninfo_unexecuted_blocks=1 00:37:19.433 00:37:19.433 ' 00:37:19.433 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:37:19.433 
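[editor's note] The trace above is scripts/common.sh comparing the installed lcov version against 1.15/2 component by component (split on '.' and '-', then numeric comparison per field) to choose compatible coverage flags. A simplified sketch of that idea, assuming purely numeric version components; this is illustrative only and not the exact cmp_versions helper:
    version_lt() {
        local -a a b
        IFS=.- read -ra a <<< "$1"
        IFS=.- read -ra b <<< "$2"
        local i x y
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0} y=${b[i]:-0}
            ((x < y)) && return 0   # strictly older
            ((x > y)) && return 1   # strictly newer
        done
        return 1                    # equal, so not less-than
    }
    # Example from this run: version_lt 1.15 2 succeeds, so the legacy lcov options are kept.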
13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:37:19.693 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:19.693 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:19.693 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:19.693 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:19.693 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:19.693 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:19.693 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:19.693 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:19.693 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:19.693 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:19.693 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:37:19.693 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=105ec898-1662-46bd-85be-b241e399edb9 00:37:19.693 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:19.693 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:19.693 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:37:19.693 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:19.693 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:19.693 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:37:19.693 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:19.693 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:19.693 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:19.694 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:37:19.694 
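[editor's note] The nvmf_veth_init trace that follows first tears down any stale devices (hence the "Cannot find device" / "Cannot open network namespace" messages) and then builds a self-contained test network: the target listens on 10.0.0.3 inside the nvmf_tgt_ns_spdk namespace while the host initiator stays on 10.0.0.1, with the veth peers joined by the nvmf_br bridge. A condensed sketch of the topology it creates, using the device names and addresses seen in this run (the second initiator/target pair, 10.0.0.2/10.0.0.4, is added the same way):
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br   # bridge the host-side veth peers together
    ip link set nvmf_tgt_br  master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up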
13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:37:19.694 Cannot find device "nvmf_init_br" 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:37:19.694 Cannot find device "nvmf_init_br2" 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:37:19.694 Cannot find device "nvmf_tgt_br" 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:37:19.694 Cannot find device "nvmf_tgt_br2" 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:37:19.694 Cannot find device "nvmf_init_br" 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:37:19.694 Cannot find device "nvmf_init_br2" 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:37:19.694 Cannot find device "nvmf_tgt_br" 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:37:19.694 Cannot find device "nvmf_tgt_br2" 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:37:19.694 Cannot find device "nvmf_br" 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:37:19.694 Cannot find device "nvmf_init_if" 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:37:19.694 Cannot find device "nvmf_init_if2" 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:37:19.694 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:37:19.694 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:37:19.694 13:58:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:37:19.694 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:37:19.694 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:37:19.955 13:58:17 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:37:19.955 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:37:19.955 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:37:19.955 00:37:19.955 --- 10.0.0.3 ping statistics --- 00:37:19.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:19.955 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:37:19.955 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:37:19.955 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:37:19.955 00:37:19.955 --- 10.0.0.4 ping statistics --- 00:37:19.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:19.955 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:37:19.955 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:19.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:37:19.955 00:37:19.955 --- 10.0.0.1 ping statistics --- 00:37:19.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:19.955 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:37:19.955 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:19.955 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:37:19.955 00:37:19.955 --- 10.0.0.2 ping statistics --- 00:37:19.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:19.955 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=70876 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 70876 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 70876 ']' 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:19.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:19.955 13:58:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:37:20.215 [2024-11-20 13:58:17.286467] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
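
The setup traced above boils down to a short sequence of iproute2/iptables commands; the following condensed sketch reuses the device names and addresses shown in the trace, and drops the SPDK wrapper functions, the iptables comment tags used for later cleanup, and all error handling:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk           # target-side veth ends live in the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                  # initiator addresses stay on the host side
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target addresses in the namespace
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip link set nvmf_init_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
      ip link set "$dev" master nvmf_br                     # the bridge ties initiator and target sides together
  done
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3                                        # host reaches the target side ...
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1         # ... and the namespace reaches back
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78

The --no-huge -s 1024 pair is what gives this test its name: the target runs from a fixed 1024 MB pool of ordinary memory instead of hugepages.
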
00:37:20.215 [2024-11-20 13:58:17.286532] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:37:20.215 [2024-11-20 13:58:17.436930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:20.215 [2024-11-20 13:58:17.497256] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:20.215 [2024-11-20 13:58:17.497310] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:20.215 [2024-11-20 13:58:17.497318] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:20.215 [2024-11-20 13:58:17.497324] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:20.215 [2024-11-20 13:58:17.497330] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:20.215 [2024-11-20 13:58:17.497869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:20.215 [2024-11-20 13:58:17.498077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:37:20.215 [2024-11-20 13:58:17.498272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:20.215 [2024-11-20 13:58:17.498272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:37:20.215 [2024-11-20 13:58:17.502775] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:37:21.153 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:21.153 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:37:21.153 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:21.153 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:21.153 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:37:21.153 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:21.153 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:21.153 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.153 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:37:21.153 [2024-11-20 13:58:18.240419] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:21.153 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.153 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:21.153 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.153 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:37:21.153 Malloc0 00:37:21.153 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.153 13:58:18 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:21.153 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.153 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:37:21.153 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.153 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:21.153 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.153 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:37:21.153 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.153 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:37:21.153 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.153 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:37:21.153 [2024-11-20 13:58:18.284595] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:37:21.153 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.153 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:37:21.153 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:37:21.153 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:37:21.153 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:37:21.153 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:21.153 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:21.153 { 00:37:21.153 "params": { 00:37:21.153 "name": "Nvme$subsystem", 00:37:21.153 "trtype": "$TEST_TRANSPORT", 00:37:21.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:21.153 "adrfam": "ipv4", 00:37:21.153 "trsvcid": "$NVMF_PORT", 00:37:21.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:21.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:21.153 "hdgst": ${hdgst:-false}, 00:37:21.153 "ddgst": ${ddgst:-false} 00:37:21.153 }, 00:37:21.153 "method": "bdev_nvme_attach_controller" 00:37:21.153 } 00:37:21.153 EOF 00:37:21.153 )") 00:37:21.153 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:37:21.153 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
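
At this point the target has been configured entirely over the RPC socket; expressed directly against scripts/rpc.py, the calls traced above come down to the following sketch (the test itself goes through the rpc_cmd wrapper rather than invoking rpc.py like this):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0               # 64 MB RAM-backed bdev with 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The gen_nvmf_target_json heredoc being expanded here produces the matching initiator-side bdev_nvme_attach_controller entry that bdevio reads from /dev/fd/62; the fully expanded JSON is printed a few lines further down in the trace.
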
00:37:21.153 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:37:21.153 13:58:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:21.153 "params": { 00:37:21.153 "name": "Nvme1", 00:37:21.153 "trtype": "tcp", 00:37:21.153 "traddr": "10.0.0.3", 00:37:21.153 "adrfam": "ipv4", 00:37:21.153 "trsvcid": "4420", 00:37:21.153 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:21.153 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:21.153 "hdgst": false, 00:37:21.153 "ddgst": false 00:37:21.153 }, 00:37:21.153 "method": "bdev_nvme_attach_controller" 00:37:21.153 }' 00:37:21.153 [2024-11-20 13:58:18.343882] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:37:21.153 [2024-11-20 13:58:18.344253] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid70912 ] 00:37:21.413 [2024-11-20 13:58:18.500079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:21.413 [2024-11-20 13:58:18.563782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:21.413 [2024-11-20 13:58:18.563972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:21.413 [2024-11-20 13:58:18.563976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:21.413 [2024-11-20 13:58:18.576972] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:37:21.673 I/O targets: 00:37:21.673 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:37:21.673 00:37:21.673 00:37:21.673 CUnit - A unit testing framework for C - Version 2.1-3 00:37:21.673 http://cunit.sourceforge.net/ 00:37:21.673 00:37:21.673 00:37:21.673 Suite: bdevio tests on: Nvme1n1 00:37:21.673 Test: blockdev write read block ...passed 00:37:21.673 Test: blockdev write zeroes read block ...passed 00:37:21.673 Test: blockdev write zeroes read no split ...passed 00:37:21.673 Test: blockdev write zeroes read split ...passed 00:37:21.673 Test: blockdev write zeroes read split partial ...passed 00:37:21.673 Test: blockdev reset ...[2024-11-20 13:58:18.790914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:37:21.673 [2024-11-20 13:58:18.791009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbaf310 (9): Bad file descriptor 00:37:21.673 passed 00:37:21.673 Test: blockdev write read 8 blocks ...[2024-11-20 13:58:18.811471] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:37:21.673 passed 00:37:21.673 Test: blockdev write read size > 128k ...passed 00:37:21.673 Test: blockdev write read invalid size ...passed 00:37:21.673 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:37:21.673 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:37:21.673 Test: blockdev write read max offset ...passed 00:37:21.673 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:37:21.673 Test: blockdev writev readv 8 blocks ...passed 00:37:21.673 Test: blockdev writev readv 30 x 1block ...passed 00:37:21.673 Test: blockdev writev readv block ...passed 00:37:21.673 Test: blockdev writev readv size > 128k ...passed 00:37:21.673 Test: blockdev writev readv size > 128k in two iovs ...passed 00:37:21.673 Test: blockdev comparev and writev ...[2024-11-20 13:58:18.819009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:21.673 [2024-11-20 13:58:18.819052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:21.673 [2024-11-20 13:58:18.819068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:21.673 [2024-11-20 13:58:18.819076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:21.673 passed 00:37:21.673 Test: blockdev nvme passthru rw ...passed 00:37:21.673 Test: blockdev nvme passthru vendor specific ...[2024-11-20 13:58:18.819431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:21.673 [2024-11-20 13:58:18.819447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:37:21.673 [2024-11-20 13:58:18.819459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:21.673 [2024-11-20 13:58:18.819466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:37:21.673 [2024-11-20 13:58:18.819838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:21.673 [2024-11-20 13:58:18.819849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:37:21.673 [2024-11-20 13:58:18.819860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:21.673 [2024-11-20 13:58:18.819868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:37:21.673 [2024-11-20 13:58:18.820223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:21.673 [2024-11-20 13:58:18.820234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:37:21.673 [2024-11-20 13:58:18.820246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:21.673 [2024-11-20 13:58:18.820254] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:37:21.673 [2024-11-20 13:58:18.820988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:21.673 [2024-11-20 13:58:18.821008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:37:21.673 [2024-11-20 13:58:18.821123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:21.673 [2024-11-20 13:58:18.821137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:37:21.673 passed 00:37:21.673 Test: blockdev nvme admin passthru ...[2024-11-20 13:58:18.821234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:21.673 [2024-11-20 13:58:18.821247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:21.673 [2024-11-20 13:58:18.821341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:21.673 [2024-11-20 13:58:18.821350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:21.673 passed 00:37:21.673 Test: blockdev copy ...passed 00:37:21.673 00:37:21.673 Run Summary: Type Total Ran Passed Failed Inactive 00:37:21.673 suites 1 1 n/a 0 0 00:37:21.673 tests 23 23 23 0 0 00:37:21.673 asserts 152 152 152 0 n/a 00:37:21.673 00:37:21.673 Elapsed time = 0.179 seconds 00:37:21.933 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:21.933 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.933 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:37:21.933 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.933 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:37:21.933 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:37:21.933 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:21.933 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:37:21.933 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:21.933 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:37:21.933 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:21.933 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:22.192 rmmod nvme_tcp 00:37:22.192 rmmod nvme_fabrics 00:37:22.192 rmmod nvme_keyring 00:37:22.192 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:22.192 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:37:22.192 13:58:19 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:37:22.192 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 70876 ']' 00:37:22.192 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 70876 00:37:22.192 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 70876 ']' 00:37:22.192 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 70876 00:37:22.192 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:37:22.192 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:22.192 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70876 00:37:22.192 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:37:22.192 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:37:22.192 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70876' 00:37:22.192 killing process with pid 70876 00:37:22.192 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 70876 00:37:22.192 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 70876 00:37:22.452 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:22.452 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:22.452 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:22.452 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:37:22.452 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:37:22.452 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:22.452 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:37:22.452 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:22.452 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:37:22.452 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:37:22.452 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:37:22.452 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:37:22.712 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:37:22.712 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:37:22.712 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:37:22.712 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set 
nvmf_tgt_br down 00:37:22.712 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:37:22.712 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:37:22.712 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:37:22.712 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:37:22.712 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:37:22.712 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:37:22.712 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:37:22.712 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:22.712 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:22.712 13:58:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:22.712 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:37:22.712 00:37:22.712 real 0m3.498s 00:37:22.712 user 0m10.124s 00:37:22.712 sys 0m1.459s 00:37:22.712 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:22.712 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:37:22.712 ************************************ 00:37:22.712 END TEST nvmf_bdevio_no_huge 00:37:22.712 ************************************ 00:37:22.972 13:58:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:37:22.972 13:58:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:22.972 13:58:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:22.972 13:58:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:37:22.972 ************************************ 00:37:22.972 START TEST nvmf_tls 00:37:22.972 ************************************ 00:37:22.972 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:37:22.972 * Looking for test storage... 
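
Teardown mirrors the setup: the SPDK-tagged iptables rules are filtered back out and the veth/bridge topology is removed. Condensed, and using only commands visible in the trace (the final namespace removal happens inside _remove_spdk_ns, whose body is suppressed by xtrace, so the last line is an assumption):

  iptables-save | grep -v SPDK_NVMF | iptables-restore      # drop only the rules tagged at setup time
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" nomaster
      ip link set "$dev" down
  done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk                          # assumed: hidden behind _remove_spdk_ns in the trace
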
00:37:22.972 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:37:22.972 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:22.972 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:37:22.972 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:22.972 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:22.972 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:22.972 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:22.972 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:22.972 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:37:22.972 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:37:22.972 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:37:22.972 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:37:22.972 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:37:23.233 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:37:23.233 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:37:23.233 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:23.233 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:37:23.233 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:37:23.233 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:23.233 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:23.233 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:37:23.233 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:37:23.233 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:23.233 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:37:23.233 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:37:23.233 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:37:23.233 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:37:23.233 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:23.233 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:37:23.233 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:37:23.233 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:23.233 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:23.233 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:37:23.233 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:23.233 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:23.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:23.233 --rc genhtml_branch_coverage=1 00:37:23.233 --rc genhtml_function_coverage=1 00:37:23.233 --rc genhtml_legend=1 00:37:23.233 --rc geninfo_all_blocks=1 00:37:23.233 --rc geninfo_unexecuted_blocks=1 00:37:23.233 00:37:23.233 ' 00:37:23.233 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:23.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:23.233 --rc genhtml_branch_coverage=1 00:37:23.233 --rc genhtml_function_coverage=1 00:37:23.233 --rc genhtml_legend=1 00:37:23.233 --rc geninfo_all_blocks=1 00:37:23.233 --rc geninfo_unexecuted_blocks=1 00:37:23.233 00:37:23.233 ' 00:37:23.233 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:23.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:23.233 --rc genhtml_branch_coverage=1 00:37:23.233 --rc genhtml_function_coverage=1 00:37:23.233 --rc genhtml_legend=1 00:37:23.233 --rc geninfo_all_blocks=1 00:37:23.233 --rc geninfo_unexecuted_blocks=1 00:37:23.233 00:37:23.233 ' 00:37:23.233 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:23.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:23.233 --rc genhtml_branch_coverage=1 00:37:23.233 --rc genhtml_function_coverage=1 00:37:23.233 --rc genhtml_legend=1 00:37:23.233 --rc geninfo_all_blocks=1 00:37:23.233 --rc geninfo_unexecuted_blocks=1 00:37:23.233 00:37:23.233 ' 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:23.234 13:58:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=105ec898-1662-46bd-85be-b241e399edb9 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:23.234 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:23.234 
13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:37:23.234 Cannot find device "nvmf_init_br" 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:37:23.234 Cannot find device "nvmf_init_br2" 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:37:23.234 Cannot find device "nvmf_tgt_br" 00:37:23.234 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:37:23.235 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:37:23.235 Cannot find device "nvmf_tgt_br2" 00:37:23.235 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:37:23.235 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:37:23.235 Cannot find device "nvmf_init_br" 00:37:23.235 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:37:23.235 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:37:23.235 Cannot find device "nvmf_init_br2" 00:37:23.235 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:37:23.235 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:37:23.235 Cannot find device "nvmf_tgt_br" 00:37:23.235 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:37:23.235 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:37:23.235 Cannot find device "nvmf_tgt_br2" 00:37:23.235 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:37:23.235 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:37:23.235 Cannot find device "nvmf_br" 00:37:23.235 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:37:23.235 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:37:23.235 Cannot find device "nvmf_init_if" 00:37:23.235 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:37:23.235 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:37:23.235 Cannot find device "nvmf_init_if2" 00:37:23.235 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:37:23.235 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:37:23.495 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:23.495 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:37:23.495 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:37:23.495 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:23.495 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:37:23.495 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:37:23.495 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:37:23.495 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:37:23.495 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:37:23.495 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:37:23.495 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:37:23.495 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:37:23.495 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:37:23.495 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:37:23.495 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:37:23.495 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:37:23.495 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:37:23.495 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:37:23.495 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:37:23.495 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:37:23.495 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:37:23.495 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:37:23.495 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:37:23.495 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:37:23.495 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:37:23.495 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:37:23.495 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:37:23.495 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:37:23.495 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:37:23.495 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:37:23.495 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:37:23.495 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:37:23.495 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:37:23.495 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:37:23.495 13:58:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:37:23.495 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:37:23.495 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:37:23.495 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:37:23.495 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:37:23.495 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.142 ms 00:37:23.495 00:37:23.495 --- 10.0.0.3 ping statistics --- 00:37:23.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:23.495 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:37:23.495 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:37:23.495 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:37:23.495 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:37:23.495 00:37:23.495 --- 10.0.0.4 ping statistics --- 00:37:23.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:23.495 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:37:23.495 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:37:23.755 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:23.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:37:23.755 00:37:23.755 --- 10.0.0.1 ping statistics --- 00:37:23.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:23.755 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:37:23.755 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:37:23.755 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:23.755 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:37:23.755 00:37:23.755 --- 10.0.0.2 ping statistics --- 00:37:23.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:23.755 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:37:23.755 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:23.755 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:37:23.755 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:23.755 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:23.755 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:23.755 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:23.755 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:23.755 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:23.755 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:23.755 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:37:23.755 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:23.755 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:23.755 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:37:23.755 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71148 00:37:23.755 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:37:23.755 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71148 00:37:23.755 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71148 ']' 00:37:23.755 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:23.755 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:23.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:23.755 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:23.755 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:23.755 13:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:37:23.755 [2024-11-20 13:58:20.939291] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
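
The target for the TLS suite is started with --wait-for-rpc, so socket options can still be changed before initialization completes; the RPC calls that follow next in the trace reduce to this sketch (jq is used the same way the test uses it to read settings back):

  rpc.py sock_set_default_impl -i ssl                        # make the ssl sock implementation the default
  rpc.py sock_impl_get_options -i ssl | jq -r .tls_version   # the trace reads back 0 before any override
  rpc.py sock_impl_set_options -i ssl --tls-version 13
  rpc.py sock_impl_get_options -i ssl | jq -r .tls_version   # now reports 13
  rpc.py sock_impl_set_options -i ssl --tls-version 7        # the excerpt ends while the test is switching versions again
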
00:37:23.755 [2024-11-20 13:58:20.939353] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:24.014 [2024-11-20 13:58:21.092989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:24.014 [2024-11-20 13:58:21.154218] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:24.014 [2024-11-20 13:58:21.154266] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:24.014 [2024-11-20 13:58:21.154272] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:24.014 [2024-11-20 13:58:21.154277] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:24.014 [2024-11-20 13:58:21.154282] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:24.014 [2024-11-20 13:58:21.154584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:24.583 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:24.583 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:37:24.583 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:24.583 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:24.583 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:37:24.583 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:24.583 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:37:24.583 13:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:37:24.843 true 00:37:24.843 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:37:24.843 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:37:25.103 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:37:25.103 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:37:25.103 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:37:25.362 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:37:25.362 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:37:25.622 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:37:25.622 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:37:25.622 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:37:25.880 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:37:25.880 13:58:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:37:25.880 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:37:25.880 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:37:26.140 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:37:26.140 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:37:26.140 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:37:26.140 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:37:26.140 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:37:26.399 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:37:26.399 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:37:26.658 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:37:26.658 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:37:26.658 13:58:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:37:26.917 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:37:26.917 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:37:27.176 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:37:27.176 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:37:27.176 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:37:27.176 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:37:27.176 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:37:27.176 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:27.176 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:37:27.176 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:37:27.176 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:37:27.176 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:37:27.176 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:37:27.176 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:37:27.176 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:37:27.176 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:27.176 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:37:27.176 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:37:27.176 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:37:27.176 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:37:27.176 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:37:27.176 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.tMddQRRkIc 00:37:27.176 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:37:27.176 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.VbRkT9VRNJ 00:37:27.176 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:37:27.176 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:37:27.176 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.tMddQRRkIc 00:37:27.176 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.VbRkT9VRNJ 00:37:27.176 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:37:27.435 13:58:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:37:27.694 [2024-11-20 13:58:24.941478] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:37:27.694 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.tMddQRRkIc 00:37:27.694 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.tMddQRRkIc 00:37:27.694 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:37:27.953 [2024-11-20 13:58:25.211798] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:27.953 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:37:28.212 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:37:28.470 [2024-11-20 13:58:25.694994] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:28.470 [2024-11-20 13:58:25.695231] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:37:28.470 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:37:28.729 malloc0 00:37:28.729 13:58:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:37:28.988 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.tMddQRRkIc 00:37:29.248 13:58:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:37:29.516 13:58:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.tMddQRRkIc 00:37:39.542 Initializing NVMe Controllers 00:37:39.542 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:37:39.542 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:37:39.542 Initialization complete. Launching workers. 00:37:39.542 ======================================================== 00:37:39.542 Latency(us) 00:37:39.542 Device Information : IOPS MiB/s Average min max 00:37:39.542 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14632.29 57.16 4374.31 959.07 6086.21 00:37:39.542 ======================================================== 00:37:39.542 Total : 14632.29 57.16 4374.31 959.07 6086.21 00:37:39.542 00:37:39.542 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tMddQRRkIc 00:37:39.542 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:37:39.542 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:37:39.542 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:37:39.542 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.tMddQRRkIc 00:37:39.542 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:39.542 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71375 00:37:39.542 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:39.542 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:37:39.542 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71375 /var/tmp/bdevperf.sock 00:37:39.542 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71375 ']' 00:37:39.542 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:39.542 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:39.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:39.542 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
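Everything the script has done on the target side up to this point condenses to a short RPC sequence: select the ssl socket implementation, pin TLS 1.3 (after round-tripping the tls-version and kTLS toggles through sock_impl_get_options), write the two interchange-format PSKs (the NVMeTLSkey-1:01:...: strings generated above) to 0600 temp files, and expose a malloc namespace behind a TLS-enabled listener. A condensed sketch, using only commands and arguments visible in this run (rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py, and the key path is the mktemp result logged above):

    # Target-side TLS bring-up as exercised by target/tls.sh in this run
    rpc.py sock_set_default_impl -i ssl
    rpc.py sock_impl_set_options -i ssl --tls-version 13
    rpc.py framework_start_init
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k: listener requires TLS
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.tMddQRRkIc
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0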
00:37:39.542 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:39.542 13:58:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:37:39.801 [2024-11-20 13:58:36.906532] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:37:39.801 [2024-11-20 13:58:36.906598] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71375 ] 00:37:39.801 [2024-11-20 13:58:37.053505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:39.801 [2024-11-20 13:58:37.106285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:40.061 [2024-11-20 13:58:37.147642] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:37:40.631 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:40.631 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:37:40.631 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.tMddQRRkIc 00:37:40.631 13:58:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:37:40.892 [2024-11-20 13:58:38.086437] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:40.892 TLSTESTn1 00:37:40.892 13:58:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:37:41.152 Running I/O for 10 seconds... 
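On the initiator side bdevperf has its own RPC socket, so the happy-path attach just logged is the same key registered against /var/tmp/bdevperf.sock followed by a TLS-enabled controller attach. A sketch of the two RPCs driving it (rpc.py again abbreviates the scripts/rpc.py path used above):

    # Initiator-side happy path: register the PSK with bdevperf, then attach over TLS
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.tMddQRRkIc
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    # bdevperf.py perform_tests then drives verify I/O against the resulting TLSTESTn1 bdev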
00:37:43.083 5564.00 IOPS, 21.73 MiB/s [2024-11-20T13:58:41.345Z] 5462.00 IOPS, 21.34 MiB/s [2024-11-20T13:58:42.726Z] 5427.00 IOPS, 21.20 MiB/s [2024-11-20T13:58:43.296Z] 5433.00 IOPS, 21.22 MiB/s [2024-11-20T13:58:44.678Z] 5488.20 IOPS, 21.44 MiB/s [2024-11-20T13:58:45.618Z] 5478.00 IOPS, 21.40 MiB/s [2024-11-20T13:58:46.557Z] 5475.29 IOPS, 21.39 MiB/s [2024-11-20T13:58:47.497Z] 5426.62 IOPS, 21.20 MiB/s [2024-11-20T13:58:48.442Z] 5424.56 IOPS, 21.19 MiB/s [2024-11-20T13:58:48.442Z] 5353.40 IOPS, 20.91 MiB/s 00:37:51.119 Latency(us) 00:37:51.119 [2024-11-20T13:58:48.442Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:51.119 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:37:51.119 Verification LBA range: start 0x0 length 0x2000 00:37:51.119 TLSTESTn1 : 10.02 5354.84 20.92 0.00 0.00 23858.68 5494.72 40065.68 00:37:51.119 [2024-11-20T13:58:48.442Z] =================================================================================================================== 00:37:51.119 [2024-11-20T13:58:48.442Z] Total : 5354.84 20.92 0.00 0.00 23858.68 5494.72 40065.68 00:37:51.119 { 00:37:51.119 "results": [ 00:37:51.119 { 00:37:51.119 "job": "TLSTESTn1", 00:37:51.119 "core_mask": "0x4", 00:37:51.119 "workload": "verify", 00:37:51.119 "status": "finished", 00:37:51.119 "verify_range": { 00:37:51.119 "start": 0, 00:37:51.119 "length": 8192 00:37:51.119 }, 00:37:51.119 "queue_depth": 128, 00:37:51.119 "io_size": 4096, 00:37:51.119 "runtime": 10.021209, 00:37:51.119 "iops": 5354.842913664409, 00:37:51.119 "mibps": 20.917355131501598, 00:37:51.119 "io_failed": 0, 00:37:51.119 "io_timeout": 0, 00:37:51.119 "avg_latency_us": 23858.683975112537, 00:37:51.119 "min_latency_us": 5494.721397379913, 00:37:51.119 "max_latency_us": 40065.676855895195 00:37:51.119 } 00:37:51.119 ], 00:37:51.119 "core_count": 1 00:37:51.119 } 00:37:51.119 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:51.119 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71375 00:37:51.119 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71375 ']' 00:37:51.119 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71375 00:37:51.119 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:37:51.119 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:51.119 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71375 00:37:51.119 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:37:51.119 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:37:51.119 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71375' 00:37:51.119 killing process with pid 71375 00:37:51.119 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71375 00:37:51.119 Received shutdown signal, test time was about 10.000000 seconds 00:37:51.119 00:37:51.119 Latency(us) 00:37:51.119 [2024-11-20T13:58:48.442Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:51.119 [2024-11-20T13:58:48.442Z] 
=================================================================================================================== 00:37:51.119 [2024-11-20T13:58:48.442Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:51.119 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71375 00:37:51.387 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VbRkT9VRNJ 00:37:51.387 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:37:51.387 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VbRkT9VRNJ 00:37:51.387 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:37:51.387 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:51.387 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:37:51.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:51.387 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:51.387 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VbRkT9VRNJ 00:37:51.387 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:37:51.387 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:37:51.387 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:37:51.387 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.VbRkT9VRNJ 00:37:51.387 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:51.387 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71515 00:37:51.387 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:51.387 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71515 /var/tmp/bdevperf.sock 00:37:51.387 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71515 ']' 00:37:51.387 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:51.387 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:51.387 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:37:51.387 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:51.387 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:37:51.387 13:58:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:37:51.387 [2024-11-20 13:58:48.597320] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:37:51.387 [2024-11-20 13:58:48.597399] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71515 ] 00:37:51.645 [2024-11-20 13:58:48.739989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:51.645 [2024-11-20 13:58:48.795551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:51.645 [2024-11-20 13:58:48.837738] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:37:52.214 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:52.214 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:37:52.214 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VbRkT9VRNJ 00:37:52.472 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:37:52.731 [2024-11-20 13:58:49.916743] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:52.731 [2024-11-20 13:58:49.921769] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:52.731 [2024-11-20 13:58:49.922328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19f6fb0 (107): Transport endpoint is not connected 00:37:52.731 [2024-11-20 13:58:49.923312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19f6fb0 (9): Bad file descriptor 00:37:52.731 [2024-11-20 13:58:49.924307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:37:52.731 [2024-11-20 13:58:49.924328] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:37:52.731 [2024-11-20 13:58:49.924335] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:37:52.731 [2024-11-20 13:58:49.924347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:37:52.731 request: 00:37:52.731 { 00:37:52.731 "name": "TLSTEST", 00:37:52.731 "trtype": "tcp", 00:37:52.731 "traddr": "10.0.0.3", 00:37:52.731 "adrfam": "ipv4", 00:37:52.731 "trsvcid": "4420", 00:37:52.731 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:52.731 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:52.731 "prchk_reftag": false, 00:37:52.731 "prchk_guard": false, 00:37:52.731 "hdgst": false, 00:37:52.731 "ddgst": false, 00:37:52.731 "psk": "key0", 00:37:52.731 "allow_unrecognized_csi": false, 00:37:52.731 "method": "bdev_nvme_attach_controller", 00:37:52.731 "req_id": 1 00:37:52.731 } 00:37:52.731 Got JSON-RPC error response 00:37:52.731 response: 00:37:52.731 { 00:37:52.731 "code": -5, 00:37:52.731 "message": "Input/output error" 00:37:52.731 } 00:37:52.731 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71515 00:37:52.731 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71515 ']' 00:37:52.731 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71515 00:37:52.731 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:37:52.731 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:52.731 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71515 00:37:52.731 killing process with pid 71515 00:37:52.731 Received shutdown signal, test time was about 10.000000 seconds 00:37:52.731 00:37:52.731 Latency(us) 00:37:52.731 [2024-11-20T13:58:50.054Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:52.731 [2024-11-20T13:58:50.054Z] =================================================================================================================== 00:37:52.731 [2024-11-20T13:58:50.054Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:52.731 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:37:52.731 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:37:52.731 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71515' 00:37:52.731 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71515 00:37:52.731 13:58:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71515 00:37:52.990 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:37:52.990 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:37:52.990 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:52.990 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:52.990 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:52.990 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.tMddQRRkIc 00:37:52.990 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:37:52.990 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.tMddQRRkIc 
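The wrong-key case above (run with /tmp/tmp.VbRkT9VRNJ, the second key generated earlier) sets the pattern for the remaining negative cases: register a key that cannot authenticate this host/subsystem pair, expect bdev_nvme_attach_controller to fail, and let the NOT wrapper assert the non-zero exit. As a sketch:

    # Wrong PSK: the target has no host entry matching this key, so the TLS handshake never completes
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VbRkT9VRNJ
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    # observed above: "Transport endpoint is not connected" and JSON-RPC error -5 (Input/output error)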
00:37:52.990 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:37:52.990 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:52.990 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:37:52.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:52.990 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:52.990 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.tMddQRRkIc 00:37:52.990 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:37:52.990 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:37:52.990 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:37:52.990 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.tMddQRRkIc 00:37:52.990 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:52.990 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71538 00:37:52.990 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:52.990 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71538 /var/tmp/bdevperf.sock 00:37:52.990 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71538 ']' 00:37:52.990 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:52.990 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:52.990 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:52.990 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:52.990 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:37:52.990 13:58:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:37:52.990 [2024-11-20 13:58:50.211983] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:37:52.990 [2024-11-20 13:58:50.212059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71538 ] 00:37:53.249 [2024-11-20 13:58:50.359278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:53.249 [2024-11-20 13:58:50.411994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:53.249 [2024-11-20 13:58:50.454184] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:37:53.817 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:53.817 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:37:53.817 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.tMddQRRkIc 00:37:54.077 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:37:54.337 [2024-11-20 13:58:51.529696] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:54.337 [2024-11-20 13:58:51.538395] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:37:54.337 [2024-11-20 13:58:51.538443] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:37:54.337 [2024-11-20 13:58:51.538507] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:54.337 [2024-11-20 13:58:51.539235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c18fb0 (107): Transport endpoint is not connected 00:37:54.337 [2024-11-20 13:58:51.540219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c18fb0 (9): Bad file descriptor 00:37:54.337 [2024-11-20 13:58:51.541216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:37:54.337 [2024-11-20 13:58:51.541233] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:37:54.337 [2024-11-20 13:58:51.541241] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:37:54.337 [2024-11-20 13:58:51.541253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:37:54.337 request: 00:37:54.337 { 00:37:54.337 "name": "TLSTEST", 00:37:54.337 "trtype": "tcp", 00:37:54.337 "traddr": "10.0.0.3", 00:37:54.337 "adrfam": "ipv4", 00:37:54.337 "trsvcid": "4420", 00:37:54.337 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:54.337 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:37:54.337 "prchk_reftag": false, 00:37:54.337 "prchk_guard": false, 00:37:54.337 "hdgst": false, 00:37:54.337 "ddgst": false, 00:37:54.337 "psk": "key0", 00:37:54.337 "allow_unrecognized_csi": false, 00:37:54.337 "method": "bdev_nvme_attach_controller", 00:37:54.337 "req_id": 1 00:37:54.337 } 00:37:54.337 Got JSON-RPC error response 00:37:54.337 response: 00:37:54.337 { 00:37:54.337 "code": -5, 00:37:54.337 "message": "Input/output error" 00:37:54.337 } 00:37:54.337 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71538 00:37:54.337 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71538 ']' 00:37:54.337 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71538 00:37:54.337 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:37:54.337 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:54.337 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71538 00:37:54.337 killing process with pid 71538 00:37:54.337 Received shutdown signal, test time was about 10.000000 seconds 00:37:54.337 00:37:54.337 Latency(us) 00:37:54.337 [2024-11-20T13:58:51.660Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:54.337 [2024-11-20T13:58:51.660Z] =================================================================================================================== 00:37:54.337 [2024-11-20T13:58:51.660Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:54.337 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:37:54.337 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:37:54.337 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71538' 00:37:54.337 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71538 00:37:54.337 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71538 00:37:54.597 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:37:54.597 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:37:54.597 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:54.597 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:54.597 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:54.597 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.tMddQRRkIc 00:37:54.597 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:37:54.597 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.tMddQRRkIc 
00:37:54.597 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:37:54.597 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:54.597 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:37:54.597 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:54.597 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.tMddQRRkIc 00:37:54.597 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:37:54.597 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:37:54.597 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:37:54.597 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.tMddQRRkIc 00:37:54.597 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:54.597 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71572 00:37:54.597 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:54.598 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:37:54.598 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71572 /var/tmp/bdevperf.sock 00:37:54.598 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71572 ']' 00:37:54.598 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:54.598 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:54.598 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:54.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:54.598 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:54.598 13:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:37:54.598 [2024-11-20 13:58:51.819854] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:37:54.598 [2024-11-20 13:58:51.819965] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71572 ] 00:37:54.857 [2024-11-20 13:58:51.978742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:54.857 [2024-11-20 13:58:52.031016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:54.857 [2024-11-20 13:58:52.073406] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:37:55.426 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:55.426 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:37:55.426 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.tMddQRRkIc 00:37:55.685 13:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:37:55.945 [2024-11-20 13:58:53.076964] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:55.945 [2024-11-20 13:58:53.085628] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:37:55.945 [2024-11-20 13:58:53.085676] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:37:55.945 [2024-11-20 13:58:53.085750] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:55.945 [2024-11-20 13:58:53.086339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d6fb0 (107): Transport endpoint is not connected 00:37:55.945 [2024-11-20 13:58:53.087327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d6fb0 (9): Bad file descriptor 00:37:55.945 [2024-11-20 13:58:53.088323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:37:55.945 [2024-11-20 13:58:53.088342] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:37:55.945 [2024-11-20 13:58:53.088349] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:37:55.945 [2024-11-20 13:58:53.088360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:37:55.945 request: 00:37:55.945 { 00:37:55.945 "name": "TLSTEST", 00:37:55.945 "trtype": "tcp", 00:37:55.945 "traddr": "10.0.0.3", 00:37:55.945 "adrfam": "ipv4", 00:37:55.945 "trsvcid": "4420", 00:37:55.945 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:37:55.945 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:55.945 "prchk_reftag": false, 00:37:55.945 "prchk_guard": false, 00:37:55.945 "hdgst": false, 00:37:55.945 "ddgst": false, 00:37:55.945 "psk": "key0", 00:37:55.945 "allow_unrecognized_csi": false, 00:37:55.945 "method": "bdev_nvme_attach_controller", 00:37:55.945 "req_id": 1 00:37:55.945 } 00:37:55.945 Got JSON-RPC error response 00:37:55.945 response: 00:37:55.945 { 00:37:55.945 "code": -5, 00:37:55.945 "message": "Input/output error" 00:37:55.945 } 00:37:55.945 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71572 00:37:55.945 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71572 ']' 00:37:55.945 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71572 00:37:55.945 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:37:55.945 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:55.945 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71572 00:37:55.945 killing process with pid 71572 00:37:55.945 Received shutdown signal, test time was about 10.000000 seconds 00:37:55.945 00:37:55.945 Latency(us) 00:37:55.945 [2024-11-20T13:58:53.268Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:55.945 [2024-11-20T13:58:53.268Z] =================================================================================================================== 00:37:55.945 [2024-11-20T13:58:53.268Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:55.945 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:37:55.945 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:37:55.945 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71572' 00:37:55.945 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71572 00:37:55.945 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71572 00:37:56.204 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:37:56.204 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:37:56.204 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:56.204 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:56.204 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:56.204 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:37:56.204 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:37:56.204 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:37:56.204 13:58:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:37:56.204 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:56.204 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:37:56.204 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:56.204 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:37:56.204 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:37:56.204 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:37:56.204 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:37:56.204 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:37:56.204 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:56.204 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71595 00:37:56.204 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:56.204 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:37:56.204 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71595 /var/tmp/bdevperf.sock 00:37:56.204 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71595 ']' 00:37:56.204 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:56.204 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:56.204 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:56.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:56.204 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:56.204 13:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:37:56.204 [2024-11-20 13:58:53.384891] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
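The two NQN-mismatch cases above fail for the same reason: the target builds the expected PSK identity from the host and subsystem NQNs (the "NVMe0R01 <hostnqn> <subnqn>" strings in the errors), so even the correct key is rejected when presented by an unregistered hostnqn (host2) or against the wrong subsystem (cnode2), and the attach RPC again returns -5. The host2 variant, as a sketch:

    # Correct key, wrong identity: host2 has no PSK registered on the target
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.tMddQRRkIc
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0
    # observed above: "Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1"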
00:37:56.204 [2024-11-20 13:58:53.385407] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71595 ] 00:37:56.463 [2024-11-20 13:58:53.537553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:56.463 [2024-11-20 13:58:53.592155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:56.463 [2024-11-20 13:58:53.634151] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:37:57.031 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:57.031 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:37:57.031 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:37:57.291 [2024-11-20 13:58:54.477050] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:37:57.291 [2024-11-20 13:58:54.477098] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:37:57.291 request: 00:37:57.291 { 00:37:57.291 "name": "key0", 00:37:57.291 "path": "", 00:37:57.291 "method": "keyring_file_add_key", 00:37:57.291 "req_id": 1 00:37:57.291 } 00:37:57.291 Got JSON-RPC error response 00:37:57.291 response: 00:37:57.291 { 00:37:57.291 "code": -1, 00:37:57.291 "message": "Operation not permitted" 00:37:57.291 } 00:37:57.291 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:37:57.550 [2024-11-20 13:58:54.684832] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:57.550 [2024-11-20 13:58:54.684893] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:37:57.550 request: 00:37:57.550 { 00:37:57.550 "name": "TLSTEST", 00:37:57.550 "trtype": "tcp", 00:37:57.550 "traddr": "10.0.0.3", 00:37:57.550 "adrfam": "ipv4", 00:37:57.550 "trsvcid": "4420", 00:37:57.550 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:57.550 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:57.550 "prchk_reftag": false, 00:37:57.550 "prchk_guard": false, 00:37:57.550 "hdgst": false, 00:37:57.550 "ddgst": false, 00:37:57.550 "psk": "key0", 00:37:57.550 "allow_unrecognized_csi": false, 00:37:57.550 "method": "bdev_nvme_attach_controller", 00:37:57.550 "req_id": 1 00:37:57.550 } 00:37:57.550 Got JSON-RPC error response 00:37:57.550 response: 00:37:57.550 { 00:37:57.550 "code": -126, 00:37:57.550 "message": "Required key not available" 00:37:57.550 } 00:37:57.550 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71595 00:37:57.550 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71595 ']' 00:37:57.550 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71595 00:37:57.550 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:37:57.550 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:57.550 13:58:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71595 00:37:57.550 killing process with pid 71595 00:37:57.550 Received shutdown signal, test time was about 10.000000 seconds 00:37:57.550 00:37:57.550 Latency(us) 00:37:57.550 [2024-11-20T13:58:54.873Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:57.550 [2024-11-20T13:58:54.873Z] =================================================================================================================== 00:37:57.550 [2024-11-20T13:58:54.873Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:57.550 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:37:57.550 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:37:57.550 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71595' 00:37:57.550 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71595 00:37:57.550 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71595 00:37:57.810 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:37:57.810 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:37:57.810 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:57.810 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:57.810 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:57.810 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 71148 00:37:57.810 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71148 ']' 00:37:57.810 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71148 00:37:57.810 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:37:57.810 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:57.810 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71148 00:37:57.810 killing process with pid 71148 00:37:57.810 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:57.810 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:57.810 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71148' 00:37:57.810 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71148 00:37:57.810 13:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71148 00:37:58.069 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:37:58.069 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:37:58.069 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:37:58.070 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:37:58.070 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:37:58.070 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:37:58.070 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:37:58.070 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:37:58.070 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:37:58.070 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.XijMJyf6Ks 00:37:58.070 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:37:58.070 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.XijMJyf6Ks 00:37:58.070 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:37:58.070 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:58.070 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:58.070 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:37:58.070 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71639 00:37:58.070 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:37:58.070 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71639 00:37:58.070 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71639 ']' 00:37:58.070 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:58.070 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:58.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:58.070 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:58.070 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:58.070 13:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:37:58.070 [2024-11-20 13:58:55.341800] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:37:58.070 [2024-11-20 13:58:55.341908] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:58.330 [2024-11-20 13:58:55.498943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:58.330 [2024-11-20 13:58:55.551786] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:58.330 [2024-11-20 13:58:55.551835] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:37:58.330 [2024-11-20 13:58:55.551841] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:58.330 [2024-11-20 13:58:55.551845] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:58.330 [2024-11-20 13:58:55.551849] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:58.330 [2024-11-20 13:58:55.552112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:58.330 [2024-11-20 13:58:55.601153] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:37:58.921 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:58.921 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:37:58.921 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:58.921 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:58.921 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:37:59.197 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:59.197 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.XijMJyf6Ks 00:37:59.197 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.XijMJyf6Ks 00:37:59.197 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:37:59.197 [2024-11-20 13:58:56.482012] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:59.197 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:37:59.456 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:37:59.717 [2024-11-20 13:58:56.885294] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:59.717 [2024-11-20 13:58:56.885501] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:37:59.717 13:58:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:37:59.999 malloc0 00:37:59.999 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:37:59.999 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.XijMJyf6Ks 00:38:00.259 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:38:00.519 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XijMJyf6Ks 00:38:00.519 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
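At this point target/tls.sh has produced the long-form PSK with format_interchange_psk (the NVMeTLSkey-1:02:... string above is the interchange form of the configured hex key with digest selector 2), written it to /tmp/tmp.XijMJyf6Ks with mode 0600, started nvmf_tgt, and configured the TLS target. A minimal sketch of that target-side sequence, reusing the log's own flags, addresses and NQNs (the key path and the 10.0.0.3 listener are this test's values, not requirements):

  #!/usr/bin/env bash
  # Target-side TLS setup as exercised by setup_nvmf_tgt in the trace above.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  KEY=/tmp/tmp.XijMJyf6Ks                      # PSK interchange file, chmod 0600
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k requests a TLS (secure-channel) listener
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC keyring_file_add_key key0 "$KEY"
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0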
00:38:00.519 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:38:00.519 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:38:00.519 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.XijMJyf6Ks 00:38:00.519 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:00.519 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71695 00:38:00.519 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:38:00.519 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:38:00.519 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71695 /var/tmp/bdevperf.sock 00:38:00.519 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71695 ']' 00:38:00.519 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:00.519 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:00.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:00.519 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:00.519 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:00.519 13:58:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:38:00.519 [2024-11-20 13:58:57.782880] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:38:00.519 [2024-11-20 13:58:57.782969] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71695 ] 00:38:00.778 [2024-11-20 13:58:57.933165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:00.778 [2024-11-20 13:58:57.992023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:00.778 [2024-11-20 13:58:58.034076] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:01.715 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:01.715 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:38:01.715 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XijMJyf6Ks 00:38:01.715 13:58:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:38:01.974 [2024-11-20 13:58:59.045515] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:01.974 TLSTESTn1 00:38:01.974 13:58:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:38:01.974 Running I/O for 10 seconds... 00:38:04.289 5898.00 IOPS, 23.04 MiB/s [2024-11-20T13:59:02.550Z] 5890.50 IOPS, 23.01 MiB/s [2024-11-20T13:59:03.488Z] 5896.33 IOPS, 23.03 MiB/s [2024-11-20T13:59:04.422Z] 5751.25 IOPS, 22.47 MiB/s [2024-11-20T13:59:05.357Z] 5632.60 IOPS, 22.00 MiB/s [2024-11-20T13:59:06.293Z] 5561.33 IOPS, 21.72 MiB/s [2024-11-20T13:59:07.673Z] 5544.29 IOPS, 21.66 MiB/s [2024-11-20T13:59:08.242Z] 5590.88 IOPS, 21.84 MiB/s [2024-11-20T13:59:09.623Z] 5628.67 IOPS, 21.99 MiB/s [2024-11-20T13:59:09.623Z] 5660.40 IOPS, 22.11 MiB/s 00:38:12.300 Latency(us) 00:38:12.300 [2024-11-20T13:59:09.623Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:12.300 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:38:12.300 Verification LBA range: start 0x0 length 0x2000 00:38:12.300 TLSTESTn1 : 10.01 5666.17 22.13 0.00 0.00 22555.45 4349.99 16369.69 00:38:12.300 [2024-11-20T13:59:09.623Z] =================================================================================================================== 00:38:12.300 [2024-11-20T13:59:09.623Z] Total : 5666.17 22.13 0.00 0.00 22555.45 4349.99 16369.69 00:38:12.300 { 00:38:12.300 "results": [ 00:38:12.300 { 00:38:12.300 "job": "TLSTESTn1", 00:38:12.300 "core_mask": "0x4", 00:38:12.300 "workload": "verify", 00:38:12.300 "status": "finished", 00:38:12.300 "verify_range": { 00:38:12.300 "start": 0, 00:38:12.300 "length": 8192 00:38:12.300 }, 00:38:12.300 "queue_depth": 128, 00:38:12.300 "io_size": 4096, 00:38:12.300 "runtime": 10.011707, 00:38:12.300 "iops": 5666.166618739442, 00:38:12.300 "mibps": 22.133463354450946, 00:38:12.300 "io_failed": 0, 00:38:12.300 "io_timeout": 0, 00:38:12.300 "avg_latency_us": 22555.447182140593, 00:38:12.300 "min_latency_us": 4349.987772925764, 00:38:12.300 
"max_latency_us": 16369.690829694324 00:38:12.300 } 00:38:12.300 ], 00:38:12.300 "core_count": 1 00:38:12.300 } 00:38:12.300 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:12.300 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71695 00:38:12.300 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71695 ']' 00:38:12.300 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71695 00:38:12.300 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:38:12.300 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:12.300 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71695 00:38:12.300 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:38:12.300 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:38:12.300 killing process with pid 71695 00:38:12.300 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71695' 00:38:12.300 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71695 00:38:12.300 Received shutdown signal, test time was about 10.000000 seconds 00:38:12.300 00:38:12.300 Latency(us) 00:38:12.300 [2024-11-20T13:59:09.623Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:12.300 [2024-11-20T13:59:09.623Z] =================================================================================================================== 00:38:12.300 [2024-11-20T13:59:09.623Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:12.300 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71695 00:38:12.300 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.XijMJyf6Ks 00:38:12.300 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XijMJyf6Ks 00:38:12.300 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:38:12.300 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XijMJyf6Ks 00:38:12.300 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:38:12.300 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:12.300 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:38:12.300 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:12.300 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XijMJyf6Ks 00:38:12.300 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:38:12.300 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:38:12.300 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:38:12.300 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.XijMJyf6Ks 00:38:12.300 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:12.300 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71830 00:38:12.300 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:38:12.300 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:38:12.300 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71830 /var/tmp/bdevperf.sock 00:38:12.300 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71830 ']' 00:38:12.300 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:12.300 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:12.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:12.300 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:12.300 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:12.300 13:59:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:38:12.300 [2024-11-20 13:59:09.562355] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:38:12.300 [2024-11-20 13:59:09.562450] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71830 ] 00:38:12.560 [2024-11-20 13:59:09.694318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:12.560 [2024-11-20 13:59:09.753571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:12.560 [2024-11-20 13:59:09.795564] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:13.498 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:13.498 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:38:13.498 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XijMJyf6Ks 00:38:13.498 [2024-11-20 13:59:10.654817] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.XijMJyf6Ks': 0100666 00:38:13.498 [2024-11-20 13:59:10.654857] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:38:13.498 request: 00:38:13.498 { 00:38:13.498 "name": "key0", 00:38:13.498 "path": "/tmp/tmp.XijMJyf6Ks", 00:38:13.498 "method": "keyring_file_add_key", 00:38:13.498 "req_id": 1 00:38:13.498 } 00:38:13.498 Got JSON-RPC error response 00:38:13.498 response: 00:38:13.498 { 00:38:13.498 "code": -1, 00:38:13.498 "message": "Operation not permitted" 00:38:13.498 } 00:38:13.498 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:38:13.758 [2024-11-20 13:59:10.870554] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:13.758 [2024-11-20 13:59:10.870601] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:38:13.758 request: 00:38:13.758 { 00:38:13.758 "name": "TLSTEST", 00:38:13.758 "trtype": "tcp", 00:38:13.758 "traddr": "10.0.0.3", 00:38:13.758 "adrfam": "ipv4", 00:38:13.758 "trsvcid": "4420", 00:38:13.758 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:13.758 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:13.758 "prchk_reftag": false, 00:38:13.758 "prchk_guard": false, 00:38:13.758 "hdgst": false, 00:38:13.758 "ddgst": false, 00:38:13.758 "psk": "key0", 00:38:13.758 "allow_unrecognized_csi": false, 00:38:13.758 "method": "bdev_nvme_attach_controller", 00:38:13.758 "req_id": 1 00:38:13.758 } 00:38:13.758 Got JSON-RPC error response 00:38:13.758 response: 00:38:13.758 { 00:38:13.758 "code": -126, 00:38:13.758 "message": "Required key not available" 00:38:13.758 } 00:38:13.758 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71830 00:38:13.758 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71830 ']' 00:38:13.758 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71830 00:38:13.758 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:38:13.758 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:13.758 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71830 00:38:13.758 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:38:13.758 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:38:13.758 killing process with pid 71830 00:38:13.758 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71830' 00:38:13.758 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71830 00:38:13.758 Received shutdown signal, test time was about 10.000000 seconds 00:38:13.758 00:38:13.758 Latency(us) 00:38:13.758 [2024-11-20T13:59:11.081Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:13.758 [2024-11-20T13:59:11.081Z] =================================================================================================================== 00:38:13.758 [2024-11-20T13:59:11.081Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:13.758 13:59:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71830 00:38:14.018 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:38:14.018 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:38:14.018 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:14.018 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:14.018 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:14.018 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 71639 00:38:14.018 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71639 ']' 00:38:14.018 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71639 00:38:14.018 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:38:14.018 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:14.018 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71639 00:38:14.018 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:14.018 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:14.018 killing process with pid 71639 00:38:14.018 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71639' 00:38:14.018 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71639 00:38:14.018 13:59:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71639 00:38:16.581 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:38:16.581 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:16.581 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:16.581 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:38:16.581 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71887 00:38:16.581 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:38:16.581 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71887 00:38:16.581 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71887 ']' 00:38:16.581 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:16.581 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:16.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:16.581 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:16.581 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:16.581 13:59:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:38:16.581 [2024-11-20 13:59:13.508608] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:38:16.581 [2024-11-20 13:59:13.508678] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:16.581 [2024-11-20 13:59:13.658229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:16.581 [2024-11-20 13:59:13.720148] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:16.581 [2024-11-20 13:59:13.720191] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:16.581 [2024-11-20 13:59:13.720197] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:16.581 [2024-11-20 13:59:13.720201] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:16.581 [2024-11-20 13:59:13.720206] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
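The error pair just above ("Invalid permissions for key file ... 0100666" followed by "Could not load PSK: key0") is the expected result of the chmod 0666 at tls.sh@171: the keyring_file module refuses a PSK file whose mode grants group or other access, so keyring_file_add_key and the dependent bdev_nvme_attach_controller both fail and the NOT wrapper counts that as a pass. The freshly started target below is about to repeat the same check on the target side before the mode is restored. Roughly, the rule being exercised is:

  # keyring_file accepts the key only while it is owner-only readable
  chmod 0600 /tmp/tmp.XijMJyf6Ks && rpc.py keyring_file_add_key key0 /tmp/tmp.XijMJyf6Ks   # succeeds
  chmod 0666 /tmp/tmp.XijMJyf6Ks && rpc.py keyring_file_add_key key0 /tmp/tmp.XijMJyf6Ks   # fails with -1 "Operation not permitted"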
00:38:16.581 [2024-11-20 13:59:13.720530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:16.581 [2024-11-20 13:59:13.767188] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:17.152 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:17.152 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:38:17.152 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:17.152 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:17.152 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:38:17.152 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:17.152 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.XijMJyf6Ks 00:38:17.152 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:38:17.152 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.XijMJyf6Ks 00:38:17.152 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:38:17.152 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:17.152 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:38:17.152 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:17.152 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.XijMJyf6Ks 00:38:17.152 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.XijMJyf6Ks 00:38:17.152 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:38:17.412 [2024-11-20 13:59:14.648406] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:17.412 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:38:17.672 13:59:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:38:17.932 [2024-11-20 13:59:15.059667] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:17.932 [2024-11-20 13:59:15.059867] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:38:17.932 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:38:18.191 malloc0 00:38:18.191 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:38:18.191 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.XijMJyf6Ks 00:38:18.451 
[2024-11-20 13:59:15.679248] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.XijMJyf6Ks': 0100666 00:38:18.451 [2024-11-20 13:59:15.679291] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:38:18.451 request: 00:38:18.451 { 00:38:18.451 "name": "key0", 00:38:18.451 "path": "/tmp/tmp.XijMJyf6Ks", 00:38:18.451 "method": "keyring_file_add_key", 00:38:18.451 "req_id": 1 00:38:18.451 } 00:38:18.451 Got JSON-RPC error response 00:38:18.451 response: 00:38:18.451 { 00:38:18.451 "code": -1, 00:38:18.451 "message": "Operation not permitted" 00:38:18.451 } 00:38:18.451 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:38:18.711 [2024-11-20 13:59:15.878914] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:38:18.711 [2024-11-20 13:59:15.878965] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:38:18.711 request: 00:38:18.711 { 00:38:18.711 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:18.711 "host": "nqn.2016-06.io.spdk:host1", 00:38:18.711 "psk": "key0", 00:38:18.711 "method": "nvmf_subsystem_add_host", 00:38:18.711 "req_id": 1 00:38:18.711 } 00:38:18.711 Got JSON-RPC error response 00:38:18.711 response: 00:38:18.711 { 00:38:18.711 "code": -32603, 00:38:18.711 "message": "Internal error" 00:38:18.711 } 00:38:18.711 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:38:18.711 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:18.711 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:18.711 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:18.711 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 71887 00:38:18.711 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71887 ']' 00:38:18.711 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71887 00:38:18.711 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:38:18.711 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:18.711 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71887 00:38:18.711 killing process with pid 71887 00:38:18.711 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:18.711 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:18.711 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71887' 00:38:18.711 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71887 00:38:18.711 13:59:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71887 00:38:18.972 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.XijMJyf6Ks 00:38:18.972 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:38:18.972 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:18.972 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:18.972 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:38:18.972 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:38:18.972 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71952 00:38:18.972 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71952 00:38:18.972 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71952 ']' 00:38:18.972 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:18.972 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:18.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:18.972 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:18.972 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:18.972 13:59:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:38:18.972 [2024-11-20 13:59:16.202321] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:38:18.972 [2024-11-20 13:59:16.202377] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:19.232 [2024-11-20 13:59:16.349217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:19.232 [2024-11-20 13:59:16.410019] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:19.232 [2024-11-20 13:59:16.410060] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:19.232 [2024-11-20 13:59:16.410066] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:19.232 [2024-11-20 13:59:16.410071] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:19.232 [2024-11-20 13:59:16.410075] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
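Each nvmfappstart in this trace launches the target the same way: inside the nvmf_tgt_ns_spdk network namespace, on one core (-m 0x2), with all tracepoint groups enabled (-e 0xFFFF), and then waitforlisten blocks until the RPC socket answers. A reduced sketch; the polling loop is only an illustrative stand-in for waitforlisten, which lives in autotest_common.sh:

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # stand-in for waitforlisten: poll until /var/tmp/spdk.sock accepts RPCs
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done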
00:38:19.232 [2024-11-20 13:59:16.410328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:19.232 [2024-11-20 13:59:16.458137] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:19.802 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:19.802 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:38:19.802 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:19.802 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:19.802 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:38:20.061 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:20.061 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.XijMJyf6Ks 00:38:20.061 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.XijMJyf6Ks 00:38:20.061 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:38:20.061 [2024-11-20 13:59:17.374578] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:20.321 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:38:20.321 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:38:20.580 [2024-11-20 13:59:17.805871] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:20.580 [2024-11-20 13:59:17.806084] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:38:20.580 13:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:38:20.839 malloc0 00:38:20.839 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:38:21.099 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.XijMJyf6Ks 00:38:21.358 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:38:21.618 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:38:21.618 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=72006 00:38:21.618 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:38:21.618 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 72006 /var/tmp/bdevperf.sock 00:38:21.618 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72006 ']' 
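The bdevperf instance just launched (-z makes it idle until it is configured over /var/tmp/bdevperf.sock) is driven entirely through RPC in the lines that follow: the key is registered on the bdevperf socket, a TLS-enabled controller named TLSTEST is attached, and save_config is used to capture both sides' configuration. Condensed into one host-side sketch with the log's own flags and values:

  BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/bdevperf.sock
  $BDEVPERF -m 0x4 -z -r $SOCK -q 128 -o 4096 -w verify -t 10 &
  $RPC -s $SOCK keyring_file_add_key key0 /tmp/tmp.XijMJyf6Ks
  $RPC -s $SOCK bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  $RPC save_config                      # target-side dump (tgtconf below)
  $RPC -s $SOCK save_config             # bdevperf-side dump (bdevperfconf below)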
00:38:21.618 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:21.618 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:21.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:21.619 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:21.619 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:21.619 13:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:38:21.619 [2024-11-20 13:59:18.775245] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:38:21.619 [2024-11-20 13:59:18.775312] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72006 ] 00:38:21.619 [2024-11-20 13:59:18.925420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:21.878 [2024-11-20 13:59:18.981716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:21.878 [2024-11-20 13:59:19.024186] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:22.446 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:22.446 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:38:22.446 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XijMJyf6Ks 00:38:22.706 13:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:38:22.965 [2024-11-20 13:59:20.131876] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:22.965 TLSTESTn1 00:38:22.965 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:38:23.536 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:38:23.536 "subsystems": [ 00:38:23.536 { 00:38:23.536 "subsystem": "keyring", 00:38:23.536 "config": [ 00:38:23.536 { 00:38:23.536 "method": "keyring_file_add_key", 00:38:23.536 "params": { 00:38:23.536 "name": "key0", 00:38:23.536 "path": "/tmp/tmp.XijMJyf6Ks" 00:38:23.536 } 00:38:23.536 } 00:38:23.536 ] 00:38:23.536 }, 00:38:23.536 { 00:38:23.536 "subsystem": "iobuf", 00:38:23.536 "config": [ 00:38:23.536 { 00:38:23.536 "method": "iobuf_set_options", 00:38:23.536 "params": { 00:38:23.536 "small_pool_count": 8192, 00:38:23.536 "large_pool_count": 1024, 00:38:23.536 "small_bufsize": 8192, 00:38:23.536 "large_bufsize": 135168, 00:38:23.536 "enable_numa": false 00:38:23.536 } 00:38:23.536 } 00:38:23.536 ] 00:38:23.536 }, 00:38:23.536 { 00:38:23.536 "subsystem": "sock", 00:38:23.536 "config": [ 00:38:23.536 { 00:38:23.536 "method": "sock_set_default_impl", 00:38:23.536 "params": { 
00:38:23.537 "impl_name": "uring" 00:38:23.537 } 00:38:23.537 }, 00:38:23.537 { 00:38:23.537 "method": "sock_impl_set_options", 00:38:23.537 "params": { 00:38:23.537 "impl_name": "ssl", 00:38:23.537 "recv_buf_size": 4096, 00:38:23.537 "send_buf_size": 4096, 00:38:23.537 "enable_recv_pipe": true, 00:38:23.537 "enable_quickack": false, 00:38:23.537 "enable_placement_id": 0, 00:38:23.537 "enable_zerocopy_send_server": true, 00:38:23.537 "enable_zerocopy_send_client": false, 00:38:23.537 "zerocopy_threshold": 0, 00:38:23.537 "tls_version": 0, 00:38:23.537 "enable_ktls": false 00:38:23.537 } 00:38:23.537 }, 00:38:23.537 { 00:38:23.537 "method": "sock_impl_set_options", 00:38:23.537 "params": { 00:38:23.537 "impl_name": "posix", 00:38:23.537 "recv_buf_size": 2097152, 00:38:23.537 "send_buf_size": 2097152, 00:38:23.537 "enable_recv_pipe": true, 00:38:23.537 "enable_quickack": false, 00:38:23.537 "enable_placement_id": 0, 00:38:23.537 "enable_zerocopy_send_server": true, 00:38:23.537 "enable_zerocopy_send_client": false, 00:38:23.537 "zerocopy_threshold": 0, 00:38:23.537 "tls_version": 0, 00:38:23.537 "enable_ktls": false 00:38:23.537 } 00:38:23.537 }, 00:38:23.537 { 00:38:23.537 "method": "sock_impl_set_options", 00:38:23.537 "params": { 00:38:23.537 "impl_name": "uring", 00:38:23.537 "recv_buf_size": 2097152, 00:38:23.537 "send_buf_size": 2097152, 00:38:23.537 "enable_recv_pipe": true, 00:38:23.537 "enable_quickack": false, 00:38:23.537 "enable_placement_id": 0, 00:38:23.537 "enable_zerocopy_send_server": false, 00:38:23.537 "enable_zerocopy_send_client": false, 00:38:23.537 "zerocopy_threshold": 0, 00:38:23.537 "tls_version": 0, 00:38:23.537 "enable_ktls": false 00:38:23.537 } 00:38:23.537 } 00:38:23.537 ] 00:38:23.537 }, 00:38:23.537 { 00:38:23.537 "subsystem": "vmd", 00:38:23.537 "config": [] 00:38:23.537 }, 00:38:23.537 { 00:38:23.537 "subsystem": "accel", 00:38:23.537 "config": [ 00:38:23.537 { 00:38:23.537 "method": "accel_set_options", 00:38:23.537 "params": { 00:38:23.537 "small_cache_size": 128, 00:38:23.537 "large_cache_size": 16, 00:38:23.537 "task_count": 2048, 00:38:23.537 "sequence_count": 2048, 00:38:23.537 "buf_count": 2048 00:38:23.537 } 00:38:23.537 } 00:38:23.537 ] 00:38:23.537 }, 00:38:23.537 { 00:38:23.537 "subsystem": "bdev", 00:38:23.537 "config": [ 00:38:23.537 { 00:38:23.537 "method": "bdev_set_options", 00:38:23.537 "params": { 00:38:23.537 "bdev_io_pool_size": 65535, 00:38:23.537 "bdev_io_cache_size": 256, 00:38:23.537 "bdev_auto_examine": true, 00:38:23.537 "iobuf_small_cache_size": 128, 00:38:23.537 "iobuf_large_cache_size": 16 00:38:23.537 } 00:38:23.537 }, 00:38:23.537 { 00:38:23.537 "method": "bdev_raid_set_options", 00:38:23.537 "params": { 00:38:23.537 "process_window_size_kb": 1024, 00:38:23.537 "process_max_bandwidth_mb_sec": 0 00:38:23.537 } 00:38:23.537 }, 00:38:23.537 { 00:38:23.537 "method": "bdev_iscsi_set_options", 00:38:23.537 "params": { 00:38:23.537 "timeout_sec": 30 00:38:23.537 } 00:38:23.537 }, 00:38:23.537 { 00:38:23.537 "method": "bdev_nvme_set_options", 00:38:23.537 "params": { 00:38:23.537 "action_on_timeout": "none", 00:38:23.537 "timeout_us": 0, 00:38:23.537 "timeout_admin_us": 0, 00:38:23.537 "keep_alive_timeout_ms": 10000, 00:38:23.537 "arbitration_burst": 0, 00:38:23.537 "low_priority_weight": 0, 00:38:23.537 "medium_priority_weight": 0, 00:38:23.537 "high_priority_weight": 0, 00:38:23.537 "nvme_adminq_poll_period_us": 10000, 00:38:23.537 "nvme_ioq_poll_period_us": 0, 00:38:23.537 "io_queue_requests": 0, 00:38:23.537 "delay_cmd_submit": 
true, 00:38:23.537 "transport_retry_count": 4, 00:38:23.537 "bdev_retry_count": 3, 00:38:23.537 "transport_ack_timeout": 0, 00:38:23.537 "ctrlr_loss_timeout_sec": 0, 00:38:23.537 "reconnect_delay_sec": 0, 00:38:23.537 "fast_io_fail_timeout_sec": 0, 00:38:23.537 "disable_auto_failback": false, 00:38:23.537 "generate_uuids": false, 00:38:23.537 "transport_tos": 0, 00:38:23.537 "nvme_error_stat": false, 00:38:23.537 "rdma_srq_size": 0, 00:38:23.537 "io_path_stat": false, 00:38:23.537 "allow_accel_sequence": false, 00:38:23.537 "rdma_max_cq_size": 0, 00:38:23.537 "rdma_cm_event_timeout_ms": 0, 00:38:23.537 "dhchap_digests": [ 00:38:23.537 "sha256", 00:38:23.537 "sha384", 00:38:23.537 "sha512" 00:38:23.537 ], 00:38:23.537 "dhchap_dhgroups": [ 00:38:23.537 "null", 00:38:23.537 "ffdhe2048", 00:38:23.537 "ffdhe3072", 00:38:23.537 "ffdhe4096", 00:38:23.537 "ffdhe6144", 00:38:23.537 "ffdhe8192" 00:38:23.537 ] 00:38:23.537 } 00:38:23.537 }, 00:38:23.537 { 00:38:23.537 "method": "bdev_nvme_set_hotplug", 00:38:23.537 "params": { 00:38:23.537 "period_us": 100000, 00:38:23.537 "enable": false 00:38:23.537 } 00:38:23.537 }, 00:38:23.537 { 00:38:23.537 "method": "bdev_malloc_create", 00:38:23.537 "params": { 00:38:23.537 "name": "malloc0", 00:38:23.537 "num_blocks": 8192, 00:38:23.537 "block_size": 4096, 00:38:23.537 "physical_block_size": 4096, 00:38:23.537 "uuid": "7e45eede-0a55-4cf2-a407-0838d3a6e988", 00:38:23.537 "optimal_io_boundary": 0, 00:38:23.537 "md_size": 0, 00:38:23.537 "dif_type": 0, 00:38:23.537 "dif_is_head_of_md": false, 00:38:23.537 "dif_pi_format": 0 00:38:23.537 } 00:38:23.537 }, 00:38:23.537 { 00:38:23.537 "method": "bdev_wait_for_examine" 00:38:23.537 } 00:38:23.537 ] 00:38:23.537 }, 00:38:23.537 { 00:38:23.537 "subsystem": "nbd", 00:38:23.537 "config": [] 00:38:23.537 }, 00:38:23.537 { 00:38:23.537 "subsystem": "scheduler", 00:38:23.537 "config": [ 00:38:23.537 { 00:38:23.537 "method": "framework_set_scheduler", 00:38:23.537 "params": { 00:38:23.537 "name": "static" 00:38:23.537 } 00:38:23.537 } 00:38:23.537 ] 00:38:23.537 }, 00:38:23.537 { 00:38:23.537 "subsystem": "nvmf", 00:38:23.537 "config": [ 00:38:23.537 { 00:38:23.537 "method": "nvmf_set_config", 00:38:23.537 "params": { 00:38:23.537 "discovery_filter": "match_any", 00:38:23.537 "admin_cmd_passthru": { 00:38:23.537 "identify_ctrlr": false 00:38:23.537 }, 00:38:23.537 "dhchap_digests": [ 00:38:23.537 "sha256", 00:38:23.537 "sha384", 00:38:23.537 "sha512" 00:38:23.537 ], 00:38:23.537 "dhchap_dhgroups": [ 00:38:23.537 "null", 00:38:23.537 "ffdhe2048", 00:38:23.537 "ffdhe3072", 00:38:23.537 "ffdhe4096", 00:38:23.537 "ffdhe6144", 00:38:23.537 "ffdhe8192" 00:38:23.537 ] 00:38:23.537 } 00:38:23.537 }, 00:38:23.537 { 00:38:23.537 "method": "nvmf_set_max_subsystems", 00:38:23.537 "params": { 00:38:23.537 "max_subsystems": 1024 00:38:23.537 } 00:38:23.537 }, 00:38:23.537 { 00:38:23.537 "method": "nvmf_set_crdt", 00:38:23.537 "params": { 00:38:23.537 "crdt1": 0, 00:38:23.537 "crdt2": 0, 00:38:23.537 "crdt3": 0 00:38:23.537 } 00:38:23.537 }, 00:38:23.537 { 00:38:23.537 "method": "nvmf_create_transport", 00:38:23.537 "params": { 00:38:23.537 "trtype": "TCP", 00:38:23.537 "max_queue_depth": 128, 00:38:23.537 "max_io_qpairs_per_ctrlr": 127, 00:38:23.537 "in_capsule_data_size": 4096, 00:38:23.537 "max_io_size": 131072, 00:38:23.537 "io_unit_size": 131072, 00:38:23.537 "max_aq_depth": 128, 00:38:23.537 "num_shared_buffers": 511, 00:38:23.537 "buf_cache_size": 4294967295, 00:38:23.537 "dif_insert_or_strip": false, 00:38:23.537 "zcopy": false, 
00:38:23.538 "c2h_success": false, 00:38:23.538 "sock_priority": 0, 00:38:23.538 "abort_timeout_sec": 1, 00:38:23.538 "ack_timeout": 0, 00:38:23.538 "data_wr_pool_size": 0 00:38:23.538 } 00:38:23.538 }, 00:38:23.538 { 00:38:23.538 "method": "nvmf_create_subsystem", 00:38:23.538 "params": { 00:38:23.538 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:23.538 "allow_any_host": false, 00:38:23.538 "serial_number": "SPDK00000000000001", 00:38:23.538 "model_number": "SPDK bdev Controller", 00:38:23.538 "max_namespaces": 10, 00:38:23.538 "min_cntlid": 1, 00:38:23.538 "max_cntlid": 65519, 00:38:23.538 "ana_reporting": false 00:38:23.538 } 00:38:23.538 }, 00:38:23.538 { 00:38:23.538 "method": "nvmf_subsystem_add_host", 00:38:23.538 "params": { 00:38:23.538 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:23.538 "host": "nqn.2016-06.io.spdk:host1", 00:38:23.538 "psk": "key0" 00:38:23.538 } 00:38:23.538 }, 00:38:23.538 { 00:38:23.538 "method": "nvmf_subsystem_add_ns", 00:38:23.538 "params": { 00:38:23.538 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:23.538 "namespace": { 00:38:23.538 "nsid": 1, 00:38:23.538 "bdev_name": "malloc0", 00:38:23.538 "nguid": "7E45EEDE0A554CF2A4070838D3A6E988", 00:38:23.538 "uuid": "7e45eede-0a55-4cf2-a407-0838d3a6e988", 00:38:23.538 "no_auto_visible": false 00:38:23.538 } 00:38:23.538 } 00:38:23.538 }, 00:38:23.538 { 00:38:23.538 "method": "nvmf_subsystem_add_listener", 00:38:23.538 "params": { 00:38:23.538 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:23.538 "listen_address": { 00:38:23.538 "trtype": "TCP", 00:38:23.538 "adrfam": "IPv4", 00:38:23.538 "traddr": "10.0.0.3", 00:38:23.538 "trsvcid": "4420" 00:38:23.538 }, 00:38:23.538 "secure_channel": true 00:38:23.538 } 00:38:23.538 } 00:38:23.538 ] 00:38:23.538 } 00:38:23.538 ] 00:38:23.538 }' 00:38:23.538 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:38:23.538 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:38:23.538 "subsystems": [ 00:38:23.538 { 00:38:23.538 "subsystem": "keyring", 00:38:23.538 "config": [ 00:38:23.538 { 00:38:23.538 "method": "keyring_file_add_key", 00:38:23.538 "params": { 00:38:23.538 "name": "key0", 00:38:23.538 "path": "/tmp/tmp.XijMJyf6Ks" 00:38:23.538 } 00:38:23.538 } 00:38:23.538 ] 00:38:23.538 }, 00:38:23.538 { 00:38:23.538 "subsystem": "iobuf", 00:38:23.538 "config": [ 00:38:23.538 { 00:38:23.538 "method": "iobuf_set_options", 00:38:23.538 "params": { 00:38:23.538 "small_pool_count": 8192, 00:38:23.538 "large_pool_count": 1024, 00:38:23.538 "small_bufsize": 8192, 00:38:23.538 "large_bufsize": 135168, 00:38:23.538 "enable_numa": false 00:38:23.538 } 00:38:23.538 } 00:38:23.538 ] 00:38:23.538 }, 00:38:23.538 { 00:38:23.538 "subsystem": "sock", 00:38:23.538 "config": [ 00:38:23.538 { 00:38:23.538 "method": "sock_set_default_impl", 00:38:23.538 "params": { 00:38:23.538 "impl_name": "uring" 00:38:23.538 } 00:38:23.538 }, 00:38:23.538 { 00:38:23.538 "method": "sock_impl_set_options", 00:38:23.538 "params": { 00:38:23.538 "impl_name": "ssl", 00:38:23.538 "recv_buf_size": 4096, 00:38:23.538 "send_buf_size": 4096, 00:38:23.538 "enable_recv_pipe": true, 00:38:23.538 "enable_quickack": false, 00:38:23.538 "enable_placement_id": 0, 00:38:23.538 "enable_zerocopy_send_server": true, 00:38:23.538 "enable_zerocopy_send_client": false, 00:38:23.538 "zerocopy_threshold": 0, 00:38:23.538 "tls_version": 0, 00:38:23.538 "enable_ktls": false 00:38:23.538 } 00:38:23.538 }, 
00:38:23.538 { 00:38:23.538 "method": "sock_impl_set_options", 00:38:23.538 "params": { 00:38:23.538 "impl_name": "posix", 00:38:23.538 "recv_buf_size": 2097152, 00:38:23.538 "send_buf_size": 2097152, 00:38:23.538 "enable_recv_pipe": true, 00:38:23.538 "enable_quickack": false, 00:38:23.538 "enable_placement_id": 0, 00:38:23.538 "enable_zerocopy_send_server": true, 00:38:23.538 "enable_zerocopy_send_client": false, 00:38:23.538 "zerocopy_threshold": 0, 00:38:23.538 "tls_version": 0, 00:38:23.538 "enable_ktls": false 00:38:23.538 } 00:38:23.538 }, 00:38:23.538 { 00:38:23.538 "method": "sock_impl_set_options", 00:38:23.538 "params": { 00:38:23.538 "impl_name": "uring", 00:38:23.538 "recv_buf_size": 2097152, 00:38:23.538 "send_buf_size": 2097152, 00:38:23.538 "enable_recv_pipe": true, 00:38:23.538 "enable_quickack": false, 00:38:23.538 "enable_placement_id": 0, 00:38:23.538 "enable_zerocopy_send_server": false, 00:38:23.538 "enable_zerocopy_send_client": false, 00:38:23.538 "zerocopy_threshold": 0, 00:38:23.538 "tls_version": 0, 00:38:23.538 "enable_ktls": false 00:38:23.538 } 00:38:23.538 } 00:38:23.538 ] 00:38:23.538 }, 00:38:23.538 { 00:38:23.538 "subsystem": "vmd", 00:38:23.538 "config": [] 00:38:23.538 }, 00:38:23.538 { 00:38:23.538 "subsystem": "accel", 00:38:23.538 "config": [ 00:38:23.538 { 00:38:23.538 "method": "accel_set_options", 00:38:23.538 "params": { 00:38:23.538 "small_cache_size": 128, 00:38:23.538 "large_cache_size": 16, 00:38:23.538 "task_count": 2048, 00:38:23.538 "sequence_count": 2048, 00:38:23.538 "buf_count": 2048 00:38:23.538 } 00:38:23.538 } 00:38:23.538 ] 00:38:23.538 }, 00:38:23.538 { 00:38:23.538 "subsystem": "bdev", 00:38:23.538 "config": [ 00:38:23.538 { 00:38:23.538 "method": "bdev_set_options", 00:38:23.538 "params": { 00:38:23.538 "bdev_io_pool_size": 65535, 00:38:23.538 "bdev_io_cache_size": 256, 00:38:23.538 "bdev_auto_examine": true, 00:38:23.538 "iobuf_small_cache_size": 128, 00:38:23.538 "iobuf_large_cache_size": 16 00:38:23.538 } 00:38:23.538 }, 00:38:23.538 { 00:38:23.538 "method": "bdev_raid_set_options", 00:38:23.538 "params": { 00:38:23.538 "process_window_size_kb": 1024, 00:38:23.538 "process_max_bandwidth_mb_sec": 0 00:38:23.538 } 00:38:23.538 }, 00:38:23.538 { 00:38:23.538 "method": "bdev_iscsi_set_options", 00:38:23.538 "params": { 00:38:23.538 "timeout_sec": 30 00:38:23.538 } 00:38:23.538 }, 00:38:23.538 { 00:38:23.538 "method": "bdev_nvme_set_options", 00:38:23.538 "params": { 00:38:23.538 "action_on_timeout": "none", 00:38:23.538 "timeout_us": 0, 00:38:23.538 "timeout_admin_us": 0, 00:38:23.538 "keep_alive_timeout_ms": 10000, 00:38:23.538 "arbitration_burst": 0, 00:38:23.538 "low_priority_weight": 0, 00:38:23.538 "medium_priority_weight": 0, 00:38:23.538 "high_priority_weight": 0, 00:38:23.538 "nvme_adminq_poll_period_us": 10000, 00:38:23.538 "nvme_ioq_poll_period_us": 0, 00:38:23.538 "io_queue_requests": 512, 00:38:23.538 "delay_cmd_submit": true, 00:38:23.538 "transport_retry_count": 4, 00:38:23.538 "bdev_retry_count": 3, 00:38:23.538 "transport_ack_timeout": 0, 00:38:23.538 "ctrlr_loss_timeout_sec": 0, 00:38:23.538 "reconnect_delay_sec": 0, 00:38:23.538 "fast_io_fail_timeout_sec": 0, 00:38:23.538 "disable_auto_failback": false, 00:38:23.538 "generate_uuids": false, 00:38:23.538 "transport_tos": 0, 00:38:23.538 "nvme_error_stat": false, 00:38:23.538 "rdma_srq_size": 0, 00:38:23.538 "io_path_stat": false, 00:38:23.538 "allow_accel_sequence": false, 00:38:23.538 "rdma_max_cq_size": 0, 00:38:23.538 "rdma_cm_event_timeout_ms": 0, 00:38:23.538 
"dhchap_digests": [ 00:38:23.538 "sha256", 00:38:23.538 "sha384", 00:38:23.538 "sha512" 00:38:23.538 ], 00:38:23.538 "dhchap_dhgroups": [ 00:38:23.538 "null", 00:38:23.538 "ffdhe2048", 00:38:23.538 "ffdhe3072", 00:38:23.538 "ffdhe4096", 00:38:23.538 "ffdhe6144", 00:38:23.538 "ffdhe8192" 00:38:23.538 ] 00:38:23.538 } 00:38:23.538 }, 00:38:23.539 { 00:38:23.539 "method": "bdev_nvme_attach_controller", 00:38:23.539 "params": { 00:38:23.539 "name": "TLSTEST", 00:38:23.539 "trtype": "TCP", 00:38:23.539 "adrfam": "IPv4", 00:38:23.539 "traddr": "10.0.0.3", 00:38:23.539 "trsvcid": "4420", 00:38:23.539 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:23.539 "prchk_reftag": false, 00:38:23.539 "prchk_guard": false, 00:38:23.539 "ctrlr_loss_timeout_sec": 0, 00:38:23.539 "reconnect_delay_sec": 0, 00:38:23.539 "fast_io_fail_timeout_sec": 0, 00:38:23.539 "psk": "key0", 00:38:23.539 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:23.539 "hdgst": false, 00:38:23.539 "ddgst": false, 00:38:23.539 "multipath": "multipath" 00:38:23.539 } 00:38:23.539 }, 00:38:23.539 { 00:38:23.539 "method": "bdev_nvme_set_hotplug", 00:38:23.539 "params": { 00:38:23.539 "period_us": 100000, 00:38:23.539 "enable": false 00:38:23.539 } 00:38:23.539 }, 00:38:23.539 { 00:38:23.539 "method": "bdev_wait_for_examine" 00:38:23.539 } 00:38:23.539 ] 00:38:23.539 }, 00:38:23.539 { 00:38:23.539 "subsystem": "nbd", 00:38:23.539 "config": [] 00:38:23.539 } 00:38:23.539 ] 00:38:23.539 }' 00:38:23.539 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 72006 00:38:23.539 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72006 ']' 00:38:23.539 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72006 00:38:23.539 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:38:23.799 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:23.799 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72006 00:38:23.799 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:38:23.799 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:38:23.799 killing process with pid 72006 00:38:23.799 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72006' 00:38:23.799 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72006 00:38:23.799 Received shutdown signal, test time was about 10.000000 seconds 00:38:23.799 00:38:23.799 Latency(us) 00:38:23.799 [2024-11-20T13:59:21.122Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:23.799 [2024-11-20T13:59:21.122Z] =================================================================================================================== 00:38:23.799 [2024-11-20T13:59:21.122Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:23.799 13:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72006 00:38:23.799 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 71952 00:38:23.799 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71952 ']' 00:38:23.799 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
kill -0 71952 00:38:23.799 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:38:23.799 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:23.799 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71952 00:38:24.059 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:24.059 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:24.059 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71952' 00:38:24.059 killing process with pid 71952 00:38:24.059 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71952 00:38:24.059 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71952 00:38:24.059 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:38:24.059 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:24.059 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:24.059 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:38:24.059 "subsystems": [ 00:38:24.059 { 00:38:24.059 "subsystem": "keyring", 00:38:24.059 "config": [ 00:38:24.059 { 00:38:24.059 "method": "keyring_file_add_key", 00:38:24.059 "params": { 00:38:24.059 "name": "key0", 00:38:24.059 "path": "/tmp/tmp.XijMJyf6Ks" 00:38:24.059 } 00:38:24.059 } 00:38:24.059 ] 00:38:24.059 }, 00:38:24.059 { 00:38:24.059 "subsystem": "iobuf", 00:38:24.059 "config": [ 00:38:24.059 { 00:38:24.059 "method": "iobuf_set_options", 00:38:24.059 "params": { 00:38:24.059 "small_pool_count": 8192, 00:38:24.059 "large_pool_count": 1024, 00:38:24.059 "small_bufsize": 8192, 00:38:24.059 "large_bufsize": 135168, 00:38:24.059 "enable_numa": false 00:38:24.059 } 00:38:24.059 } 00:38:24.059 ] 00:38:24.059 }, 00:38:24.059 { 00:38:24.059 "subsystem": "sock", 00:38:24.059 "config": [ 00:38:24.059 { 00:38:24.059 "method": "sock_set_default_impl", 00:38:24.059 "params": { 00:38:24.059 "impl_name": "uring" 00:38:24.059 } 00:38:24.059 }, 00:38:24.059 { 00:38:24.059 "method": "sock_impl_set_options", 00:38:24.059 "params": { 00:38:24.059 "impl_name": "ssl", 00:38:24.060 "recv_buf_size": 4096, 00:38:24.060 "send_buf_size": 4096, 00:38:24.060 "enable_recv_pipe": true, 00:38:24.060 "enable_quickack": false, 00:38:24.060 "enable_placement_id": 0, 00:38:24.060 "enable_zerocopy_send_server": true, 00:38:24.060 "enable_zerocopy_send_client": false, 00:38:24.060 "zerocopy_threshold": 0, 00:38:24.060 "tls_version": 0, 00:38:24.060 "enable_ktls": false 00:38:24.060 } 00:38:24.060 }, 00:38:24.060 { 00:38:24.060 "method": "sock_impl_set_options", 00:38:24.060 "params": { 00:38:24.060 "impl_name": "posix", 00:38:24.060 "recv_buf_size": 2097152, 00:38:24.060 "send_buf_size": 2097152, 00:38:24.060 "enable_recv_pipe": true, 00:38:24.060 "enable_quickack": false, 00:38:24.060 "enable_placement_id": 0, 00:38:24.060 "enable_zerocopy_send_server": true, 00:38:24.060 "enable_zerocopy_send_client": false, 00:38:24.060 "zerocopy_threshold": 0, 00:38:24.060 "tls_version": 0, 00:38:24.060 "enable_ktls": false 00:38:24.060 } 00:38:24.060 }, 00:38:24.060 { 00:38:24.060 "method": "sock_impl_set_options", 
00:38:24.060 "params": { 00:38:24.060 "impl_name": "uring", 00:38:24.060 "recv_buf_size": 2097152, 00:38:24.060 "send_buf_size": 2097152, 00:38:24.060 "enable_recv_pipe": true, 00:38:24.060 "enable_quickack": false, 00:38:24.060 "enable_placement_id": 0, 00:38:24.060 "enable_zerocopy_send_server": false, 00:38:24.060 "enable_zerocopy_send_client": false, 00:38:24.060 "zerocopy_threshold": 0, 00:38:24.060 "tls_version": 0, 00:38:24.060 "enable_ktls": false 00:38:24.060 } 00:38:24.060 } 00:38:24.060 ] 00:38:24.060 }, 00:38:24.060 { 00:38:24.060 "subsystem": "vmd", 00:38:24.060 "config": [] 00:38:24.060 }, 00:38:24.060 { 00:38:24.060 "subsystem": "accel", 00:38:24.060 "config": [ 00:38:24.060 { 00:38:24.060 "method": "accel_set_options", 00:38:24.060 "params": { 00:38:24.060 "small_cache_size": 128, 00:38:24.060 "large_cache_size": 16, 00:38:24.060 "task_count": 2048, 00:38:24.060 "sequence_count": 2048, 00:38:24.060 "buf_count": 2048 00:38:24.060 } 00:38:24.060 } 00:38:24.060 ] 00:38:24.060 }, 00:38:24.060 { 00:38:24.060 "subsystem": "bdev", 00:38:24.060 "config": [ 00:38:24.060 { 00:38:24.060 "method": "bdev_set_options", 00:38:24.060 "params": { 00:38:24.060 "bdev_io_pool_size": 65535, 00:38:24.060 "bdev_io_cache_size": 256, 00:38:24.060 "bdev_auto_examine": true, 00:38:24.060 "iobuf_small_cache_size": 128, 00:38:24.060 "iobuf_large_cache_size": 16 00:38:24.060 } 00:38:24.060 }, 00:38:24.060 { 00:38:24.060 "method": "bdev_raid_set_options", 00:38:24.060 "params": { 00:38:24.060 "process_window_size_kb": 1024, 00:38:24.060 "process_max_bandwidth_mb_sec": 0 00:38:24.060 } 00:38:24.060 }, 00:38:24.060 { 00:38:24.060 "method": "bdev_iscsi_set_options", 00:38:24.060 "params": { 00:38:24.060 "timeout_sec": 30 00:38:24.060 } 00:38:24.060 }, 00:38:24.060 { 00:38:24.060 "method": "bdev_nvme_set_options", 00:38:24.060 "params": { 00:38:24.060 "action_on_timeout": "none", 00:38:24.060 "timeout_us": 0, 00:38:24.060 "timeout_admin_us": 0, 00:38:24.060 "keep_alive_timeout_ms": 10000, 00:38:24.060 "arbitration_burst": 0, 00:38:24.060 "low_priority_weight": 0, 00:38:24.060 "medium_priority_weight": 0, 00:38:24.060 "high_priority_weight": 0, 00:38:24.060 "nvme_adminq_poll_period_us": 10000, 00:38:24.060 "nvme_ioq_poll_period_us": 0, 00:38:24.060 "io_queue_requests": 0, 00:38:24.060 "delay_cmd_submit": true, 00:38:24.060 "transport_retry_count": 4, 00:38:24.060 "bdev_retry_count": 3, 00:38:24.060 "transport_ack_timeout": 0, 00:38:24.060 "ctrlr_loss_timeout_sec": 0, 00:38:24.060 "reconnect_delay_sec": 0, 00:38:24.060 "fast_io_fail_timeout_sec": 0, 00:38:24.060 "disable_auto_failback": false, 00:38:24.060 "generate_uuids": false, 00:38:24.060 "transport_tos": 0, 00:38:24.060 "nvme_error_stat": false, 00:38:24.060 "rdma_srq_size": 0, 00:38:24.060 "io_path_stat": false, 00:38:24.060 "allow_accel_sequence": false, 00:38:24.060 "rdma_max_cq_size": 0, 00:38:24.060 "rdma_cm_event_timeout_ms": 0, 00:38:24.060 "dhchap_digests": [ 00:38:24.060 "sha256", 00:38:24.060 "sha384", 00:38:24.060 "sha512" 00:38:24.060 ], 00:38:24.060 "dhchap_dhgroups": [ 00:38:24.060 "null", 00:38:24.060 "ffdhe2048", 00:38:24.060 "ffdhe3072", 00:38:24.060 "ffdhe4096", 00:38:24.060 "ffdhe6144", 00:38:24.060 "ffdhe8192" 00:38:24.060 ] 00:38:24.060 } 00:38:24.060 }, 00:38:24.060 { 00:38:24.060 "method": "bdev_nvme_set_hotplug", 00:38:24.060 "params": { 00:38:24.060 "period_us": 100000, 00:38:24.060 "enable": false 00:38:24.060 } 00:38:24.060 }, 00:38:24.060 { 00:38:24.060 "method": "bdev_malloc_create", 00:38:24.060 "params": { 00:38:24.060 
"name": "malloc0", 00:38:24.060 "num_blocks": 8192, 00:38:24.060 "block_size": 4096, 00:38:24.060 "physical_block_size": 4096, 00:38:24.060 "uuid": "7e45eede-0a55-4cf2-a407-0838d3a6e988", 00:38:24.060 "optimal_io_boundary": 0, 00:38:24.060 "md_size": 0, 00:38:24.060 "dif_type": 0, 00:38:24.060 "dif_is_head_of_md": false, 00:38:24.060 "dif_pi_format": 0 00:38:24.060 } 00:38:24.060 }, 00:38:24.060 { 00:38:24.060 "method": "bdev_wait_for_examine" 00:38:24.060 } 00:38:24.060 ] 00:38:24.060 }, 00:38:24.060 { 00:38:24.060 "subsystem": "nbd", 00:38:24.060 "config": [] 00:38:24.060 }, 00:38:24.060 { 00:38:24.060 "subsystem": "scheduler", 00:38:24.060 "config": [ 00:38:24.060 { 00:38:24.060 "method": "framework_set_scheduler", 00:38:24.060 "params": { 00:38:24.060 "name": "static" 00:38:24.060 } 00:38:24.060 } 00:38:24.060 ] 00:38:24.060 }, 00:38:24.060 { 00:38:24.061 "subsystem": "nvmf", 00:38:24.061 "config": [ 00:38:24.061 { 00:38:24.061 "method": "nvmf_set_config", 00:38:24.061 "params": { 00:38:24.061 "discovery_filter": "match_any", 00:38:24.061 "admin_cmd_passthru": { 00:38:24.061 "identify_ctrlr": false 00:38:24.061 }, 00:38:24.061 "dhchap_digests": [ 00:38:24.061 "sha256", 00:38:24.061 "sha384", 00:38:24.061 "sha512" 00:38:24.061 ], 00:38:24.061 "dhchap_dhgroups": [ 00:38:24.061 "null", 00:38:24.061 "ffdhe2048", 00:38:24.061 "ffdhe3072", 00:38:24.061 "ffdhe4096", 00:38:24.061 "ffdhe6144", 00:38:24.061 "ffdhe8192" 00:38:24.061 ] 00:38:24.061 } 00:38:24.061 }, 00:38:24.061 { 00:38:24.061 "method": "nvmf_set_max_subsystems", 00:38:24.061 "params": { 00:38:24.061 "max_subsystems": 1024 00:38:24.061 } 00:38:24.061 }, 00:38:24.061 { 00:38:24.061 "method": "nvmf_set_crdt", 00:38:24.061 "params": { 00:38:24.061 "crdt1": 0, 00:38:24.061 "crdt2": 0, 00:38:24.061 "crdt3": 0 00:38:24.061 } 00:38:24.061 }, 00:38:24.061 { 00:38:24.061 "method": "nvmf_create_transport", 00:38:24.061 "params": { 00:38:24.061 "trtype": "TCP", 00:38:24.061 "max_queue_depth": 128, 00:38:24.061 "max_io_qpairs_per_ctrlr": 127, 00:38:24.061 "in_capsule_data_size": 4096, 00:38:24.061 "max_io_size": 131072, 00:38:24.061 "io_unit_size": 131072, 00:38:24.061 "max_aq_depth": 128, 00:38:24.061 "num_shared_buffers": 511, 00:38:24.061 "buf_cache_size": 4294967295, 00:38:24.061 "dif_insert_or_strip": false, 00:38:24.061 "zcopy": false, 00:38:24.061 "c2h_success": false, 00:38:24.061 "sock_priority": 0, 00:38:24.061 "abort_timeout_sec": 1, 00:38:24.061 "ack_timeout": 0, 00:38:24.061 "data_wr_pool_size": 0 00:38:24.061 } 00:38:24.061 }, 00:38:24.061 { 00:38:24.061 "method": "nvmf_create_subsystem", 00:38:24.061 "params": { 00:38:24.061 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:24.061 "allow_any_host": false, 00:38:24.061 "serial_number": "SPDK00000000000001", 00:38:24.061 "model_number": "SPDK bdev Controller", 00:38:24.061 "max_namespaces": 10, 00:38:24.061 "min_cntlid": 1, 00:38:24.061 "max_cntlid": 65519, 00:38:24.061 "ana_reporting": false 00:38:24.061 } 00:38:24.061 }, 00:38:24.061 { 00:38:24.061 "method": "nvmf_subsystem_add_host", 00:38:24.061 "params": { 00:38:24.061 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:24.061 "host": "nqn.2016-06.io.spdk:host1", 00:38:24.061 "psk": "key0" 00:38:24.061 } 00:38:24.061 }, 00:38:24.061 { 00:38:24.061 "method": "nvmf_subsystem_add_ns", 00:38:24.061 "params": { 00:38:24.061 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:24.061 "namespace": { 00:38:24.061 "nsid": 1, 00:38:24.061 "bdev_name": "malloc0", 00:38:24.061 "nguid": "7E45EEDE0A554CF2A4070838D3A6E988", 00:38:24.061 "uuid": 
"7e45eede-0a55-4cf2-a407-0838d3a6e988", 00:38:24.061 "no_auto_visible": false 00:38:24.061 } 00:38:24.061 } 00:38:24.061 }, 00:38:24.061 { 00:38:24.061 "method": "nvmf_subsystem_add_listener", 00:38:24.061 "params": { 00:38:24.061 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:24.061 "listen_address": { 00:38:24.061 "trtype": "TCP", 00:38:24.061 "adrfam": "IPv4", 00:38:24.061 "traddr": "10.0.0.3", 00:38:24.061 "trsvcid": "4420" 00:38:24.061 }, 00:38:24.061 "secure_channel": true 00:38:24.061 } 00:38:24.061 } 00:38:24.061 ] 00:38:24.061 } 00:38:24.061 ] 00:38:24.061 }' 00:38:24.061 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:38:24.061 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72056 00:38:24.061 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:38:24.061 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72056 00:38:24.061 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72056 ']' 00:38:24.061 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:24.061 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:24.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:24.061 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:24.061 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:24.061 13:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:38:24.322 [2024-11-20 13:59:21.413896] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:38:24.322 [2024-11-20 13:59:21.413971] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:24.322 [2024-11-20 13:59:21.547059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:24.322 [2024-11-20 13:59:21.608069] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:24.322 [2024-11-20 13:59:21.608126] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:24.322 [2024-11-20 13:59:21.608133] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:24.322 [2024-11-20 13:59:21.608138] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:24.322 [2024-11-20 13:59:21.608142] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:24.322 [2024-11-20 13:59:21.608559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:24.582 [2024-11-20 13:59:21.768870] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:24.582 [2024-11-20 13:59:21.853547] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:24.582 [2024-11-20 13:59:21.885412] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:24.582 [2024-11-20 13:59:21.885603] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:38:25.152 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:25.152 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:38:25.152 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:25.152 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:25.152 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:38:25.152 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:25.152 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=72089 00:38:25.152 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 72089 /var/tmp/bdevperf.sock 00:38:25.152 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72089 ']' 00:38:25.152 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:25.152 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:25.153 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:38:25.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:25.153 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
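The 'Waiting for process to start up and listen on UNIX domain socket ...' message comes from the waitforlisten helper, which blocks until the freshly launched bdevperf answers on its RPC socket. The helper's real body lives in autotest_common.sh; the following is only a rough, hypothetical stand-in that behaves the same way from the outside.

# Hypothetical stand-in, not the actual helper: poll the RPC socket until it answers.
wait_for_rpc_sock() {
    local sock=${1:-/var/tmp/bdevperf.sock}
    local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
}
wait_for_rpc_sock /var/tmp/bdevperf.sock   # returns once bdevperf is ready for RPCs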
00:38:25.153 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:25.153 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:38:25.153 "subsystems": [ 00:38:25.153 { 00:38:25.153 "subsystem": "keyring", 00:38:25.153 "config": [ 00:38:25.153 { 00:38:25.153 "method": "keyring_file_add_key", 00:38:25.153 "params": { 00:38:25.153 "name": "key0", 00:38:25.153 "path": "/tmp/tmp.XijMJyf6Ks" 00:38:25.153 } 00:38:25.153 } 00:38:25.153 ] 00:38:25.153 }, 00:38:25.153 { 00:38:25.153 "subsystem": "iobuf", 00:38:25.153 "config": [ 00:38:25.153 { 00:38:25.153 "method": "iobuf_set_options", 00:38:25.153 "params": { 00:38:25.153 "small_pool_count": 8192, 00:38:25.153 "large_pool_count": 1024, 00:38:25.153 "small_bufsize": 8192, 00:38:25.153 "large_bufsize": 135168, 00:38:25.153 "enable_numa": false 00:38:25.153 } 00:38:25.153 } 00:38:25.153 ] 00:38:25.153 }, 00:38:25.153 { 00:38:25.153 "subsystem": "sock", 00:38:25.153 "config": [ 00:38:25.153 { 00:38:25.153 "method": "sock_set_default_impl", 00:38:25.153 "params": { 00:38:25.153 "impl_name": "uring" 00:38:25.153 } 00:38:25.153 }, 00:38:25.153 { 00:38:25.153 "method": "sock_impl_set_options", 00:38:25.153 "params": { 00:38:25.153 "impl_name": "ssl", 00:38:25.153 "recv_buf_size": 4096, 00:38:25.153 "send_buf_size": 4096, 00:38:25.153 "enable_recv_pipe": true, 00:38:25.153 "enable_quickack": false, 00:38:25.153 "enable_placement_id": 0, 00:38:25.153 "enable_zerocopy_send_server": true, 00:38:25.153 "enable_zerocopy_send_client": false, 00:38:25.153 "zerocopy_threshold": 0, 00:38:25.153 "tls_version": 0, 00:38:25.153 "enable_ktls": false 00:38:25.153 } 00:38:25.153 }, 00:38:25.153 { 00:38:25.153 "method": "sock_impl_set_options", 00:38:25.153 "params": { 00:38:25.153 "impl_name": "posix", 00:38:25.153 "recv_buf_size": 2097152, 00:38:25.153 "send_buf_size": 2097152, 00:38:25.153 "enable_recv_pipe": true, 00:38:25.153 "enable_quickack": false, 00:38:25.153 "enable_placement_id": 0, 00:38:25.153 "enable_zerocopy_send_server": true, 00:38:25.153 "enable_zerocopy_send_client": false, 00:38:25.153 "zerocopy_threshold": 0, 00:38:25.153 "tls_version": 0, 00:38:25.153 "enable_ktls": false 00:38:25.153 } 00:38:25.153 }, 00:38:25.153 { 00:38:25.153 "method": "sock_impl_set_options", 00:38:25.153 "params": { 00:38:25.153 "impl_name": "uring", 00:38:25.153 "recv_buf_size": 2097152, 00:38:25.153 "send_buf_size": 2097152, 00:38:25.153 "enable_recv_pipe": true, 00:38:25.153 "enable_quickack": false, 00:38:25.153 "enable_placement_id": 0, 00:38:25.153 "enable_zerocopy_send_server": false, 00:38:25.153 "enable_zerocopy_send_client": false, 00:38:25.153 "zerocopy_threshold": 0, 00:38:25.153 "tls_version": 0, 00:38:25.153 "enable_ktls": false 00:38:25.153 } 00:38:25.153 } 00:38:25.153 ] 00:38:25.153 }, 00:38:25.153 { 00:38:25.153 "subsystem": "vmd", 00:38:25.153 "config": [] 00:38:25.153 }, 00:38:25.153 { 00:38:25.153 "subsystem": "accel", 00:38:25.153 "config": [ 00:38:25.153 { 00:38:25.153 "method": "accel_set_options", 00:38:25.153 "params": { 00:38:25.153 "small_cache_size": 128, 00:38:25.153 "large_cache_size": 16, 00:38:25.153 "task_count": 2048, 00:38:25.153 "sequence_count": 2048, 00:38:25.153 "buf_count": 2048 00:38:25.153 } 00:38:25.153 } 00:38:25.153 ] 00:38:25.153 }, 00:38:25.153 { 00:38:25.153 "subsystem": "bdev", 00:38:25.153 "config": [ 00:38:25.153 { 00:38:25.153 "method": "bdev_set_options", 00:38:25.153 "params": { 00:38:25.153 "bdev_io_pool_size": 65535, 00:38:25.153 
"bdev_io_cache_size": 256, 00:38:25.153 "bdev_auto_examine": true, 00:38:25.153 "iobuf_small_cache_size": 128, 00:38:25.153 "iobuf_large_cache_size": 16 00:38:25.153 } 00:38:25.153 }, 00:38:25.153 { 00:38:25.153 "method": "bdev_raid_set_options", 00:38:25.153 "params": { 00:38:25.153 "process_window_size_kb": 1024, 00:38:25.153 "process_max_bandwidth_mb_sec": 0 00:38:25.153 } 00:38:25.153 }, 00:38:25.153 { 00:38:25.153 "method": "bdev_iscsi_set_options", 00:38:25.153 "params": { 00:38:25.153 "timeout_sec": 30 00:38:25.153 } 00:38:25.153 }, 00:38:25.153 { 00:38:25.153 "method": "bdev_nvme_set_options", 00:38:25.153 "params": { 00:38:25.153 "action_on_timeout": "none", 00:38:25.153 "timeout_us": 0, 00:38:25.153 "timeout_admin_us": 0, 00:38:25.153 "keep_alive_timeout_ms": 10000, 00:38:25.153 "arbitration_burst": 0, 00:38:25.153 "low_priority_weight": 0, 00:38:25.153 "medium_priority_weight": 0, 00:38:25.153 "high_priority_weight": 0, 00:38:25.153 "nvme_adminq_poll_period_us": 10000, 00:38:25.153 "nvme_ioq_poll_period_us": 0, 00:38:25.153 "io_queue_requests": 512, 00:38:25.153 "delay_cmd_submit": true, 00:38:25.153 "transport_retry_count": 4, 00:38:25.153 "bdev_retry_count": 3, 00:38:25.153 "transport_ack_timeout": 0, 00:38:25.153 "ctrlr_loss_timeout_sec": 0, 00:38:25.154 "reconnect_delay_sec": 0, 00:38:25.154 "fast_io_fail_timeout_sec": 0, 00:38:25.154 "disable_auto_failback": false, 00:38:25.154 "generate_uuids": false, 00:38:25.154 "transport_tos": 0, 00:38:25.154 "nvme_error_stat": false, 00:38:25.154 "rdma_srq_size": 0, 00:38:25.154 "io_path_stat": false, 00:38:25.154 "allow_accel_sequence": false, 00:38:25.154 "rdma_max_cq_size": 0, 00:38:25.154 "rdma_cm_event_timeout_ms": 0, 00:38:25.154 "dhchap_digests": [ 00:38:25.154 "sha256", 00:38:25.154 "sha384", 00:38:25.154 "sha512" 00:38:25.154 ], 00:38:25.154 "dhchap_dhgroups": [ 00:38:25.154 "null", 00:38:25.154 "ffdhe2048", 00:38:25.154 "ffdhe3072", 00:38:25.154 "ffdhe4096", 00:38:25.154 "ffdhe6144", 00:38:25.154 "ffdhe8192" 00:38:25.154 ] 00:38:25.154 } 00:38:25.154 }, 00:38:25.154 { 00:38:25.154 "method": "bdev_nvme_attach_controller", 00:38:25.154 "params": { 00:38:25.154 "name": "TLSTEST", 00:38:25.154 "trtype": "TCP", 00:38:25.154 "adrfam": "IPv4", 00:38:25.154 "traddr": "10.0.0.3", 00:38:25.154 "trsvcid": "4420", 00:38:25.154 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:25.154 "prchk_reftag": false, 00:38:25.154 "prchk_guard": false, 00:38:25.154 "ctrlr_loss_timeout_sec": 0, 00:38:25.154 "reconnect_delay_sec": 0, 00:38:25.154 "fast_io_fail_timeout_sec": 0, 00:38:25.154 "psk": "key0", 00:38:25.154 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:25.154 "hdgst": false, 00:38:25.154 "ddgst": false, 00:38:25.154 "multipath": "multipath" 00:38:25.154 } 00:38:25.154 }, 00:38:25.154 { 00:38:25.154 "method": "bdev_nvme_set_hotplug", 00:38:25.154 "params": { 00:38:25.154 "period_us": 100000, 00:38:25.154 "enable": false 00:38:25.154 } 00:38:25.154 }, 00:38:25.154 { 00:38:25.154 "method": "bdev_wait_for_examine" 00:38:25.154 } 00:38:25.154 ] 00:38:25.154 }, 00:38:25.154 { 00:38:25.154 "subsystem": "nbd", 00:38:25.154 "config": [] 00:38:25.154 } 00:38:25.154 ] 00:38:25.154 }' 00:38:25.154 13:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:38:25.154 [2024-11-20 13:59:22.446119] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:38:25.154 [2024-11-20 13:59:22.446219] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72089 ] 00:38:25.413 [2024-11-20 13:59:22.593632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:25.413 [2024-11-20 13:59:22.649637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:25.671 [2024-11-20 13:59:22.772349] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:25.671 [2024-11-20 13:59:22.821770] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:26.239 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:26.239 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:38:26.239 13:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:38:26.239 Running I/O for 10 seconds... 00:38:28.118 5245.00 IOPS, 20.49 MiB/s [2024-11-20T13:59:26.822Z] 5167.50 IOPS, 20.19 MiB/s [2024-11-20T13:59:27.762Z] 5255.67 IOPS, 20.53 MiB/s [2024-11-20T13:59:28.702Z] 5364.50 IOPS, 20.96 MiB/s [2024-11-20T13:59:29.641Z] 5468.80 IOPS, 21.36 MiB/s [2024-11-20T13:59:30.580Z] 5541.33 IOPS, 21.65 MiB/s [2024-11-20T13:59:31.520Z] 5571.43 IOPS, 21.76 MiB/s [2024-11-20T13:59:32.460Z] 5587.38 IOPS, 21.83 MiB/s [2024-11-20T13:59:33.399Z] 5588.67 IOPS, 21.83 MiB/s [2024-11-20T13:59:33.662Z] 5586.00 IOPS, 21.82 MiB/s 00:38:36.339 Latency(us) 00:38:36.339 [2024-11-20T13:59:33.662Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:36.339 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:38:36.339 Verification LBA range: start 0x0 length 0x2000 00:38:36.339 TLSTESTn1 : 10.01 5592.40 21.85 0.00 0.00 22852.72 3777.62 18544.68 00:38:36.339 [2024-11-20T13:59:33.662Z] =================================================================================================================== 00:38:36.339 [2024-11-20T13:59:33.662Z] Total : 5592.40 21.85 0.00 0.00 22852.72 3777.62 18544.68 00:38:36.339 { 00:38:36.339 "results": [ 00:38:36.339 { 00:38:36.339 "job": "TLSTESTn1", 00:38:36.339 "core_mask": "0x4", 00:38:36.339 "workload": "verify", 00:38:36.339 "status": "finished", 00:38:36.339 "verify_range": { 00:38:36.339 "start": 0, 00:38:36.339 "length": 8192 00:38:36.339 }, 00:38:36.339 "queue_depth": 128, 00:38:36.339 "io_size": 4096, 00:38:36.339 "runtime": 10.011259, 00:38:36.339 "iops": 5592.403512884843, 00:38:36.339 "mibps": 21.845326222206417, 00:38:36.339 "io_failed": 0, 00:38:36.339 "io_timeout": 0, 00:38:36.339 "avg_latency_us": 22852.723594864467, 00:38:36.339 "min_latency_us": 3777.62096069869, 00:38:36.339 "max_latency_us": 18544.684716157204 00:38:36.339 } 00:38:36.339 ], 00:38:36.339 "core_count": 1 00:38:36.339 } 00:38:36.339 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:36.339 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 72089 00:38:36.339 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72089 ']' 00:38:36.339 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 72089 00:38:36.339 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:38:36.339 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:36.339 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72089 00:38:36.339 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:38:36.339 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:38:36.339 killing process with pid 72089 00:38:36.339 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72089' 00:38:36.339 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72089 00:38:36.339 Received shutdown signal, test time was about 10.000000 seconds 00:38:36.339 00:38:36.339 Latency(us) 00:38:36.339 [2024-11-20T13:59:33.662Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:36.339 [2024-11-20T13:59:33.662Z] =================================================================================================================== 00:38:36.339 [2024-11-20T13:59:33.662Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:36.339 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72089 00:38:36.601 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 72056 00:38:36.601 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72056 ']' 00:38:36.601 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72056 00:38:36.601 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:38:36.601 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:36.601 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72056 00:38:36.601 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:36.601 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:36.601 killing process with pid 72056 00:38:36.601 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72056' 00:38:36.601 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72056 00:38:36.601 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72056 00:38:36.601 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:38:36.601 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:36.601 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:36.601 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:38:36.860 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72222 00:38:36.861 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:38:36.861 13:59:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72222 00:38:36.861 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72222 ']' 00:38:36.861 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:36.861 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:36.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:36.861 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:36.861 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:36.861 13:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:38:36.861 [2024-11-20 13:59:33.986673] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:38:36.861 [2024-11-20 13:59:33.986760] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:36.861 [2024-11-20 13:59:34.119662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:37.120 [2024-11-20 13:59:34.203981] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:37.120 [2024-11-20 13:59:34.204039] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:37.120 [2024-11-20 13:59:34.204047] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:37.120 [2024-11-20 13:59:34.204054] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:37.120 [2024-11-20 13:59:34.204059] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:37.120 [2024-11-20 13:59:34.204416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:37.120 [2024-11-20 13:59:34.274254] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:37.690 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:37.690 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:38:37.690 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:37.690 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:37.690 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:38:37.690 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:37.690 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.XijMJyf6Ks 00:38:37.690 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.XijMJyf6Ks 00:38:37.690 13:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:38:37.949 [2024-11-20 13:59:35.189080] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:37.949 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:38:38.209 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:38:38.468 [2024-11-20 13:59:35.644288] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:38.468 [2024-11-20 13:59:35.644536] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:38:38.468 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:38:38.728 malloc0 00:38:38.728 13:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:38:38.989 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.XijMJyf6Ks 00:38:39.248 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:38:39.248 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:38:39.248 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=72277 00:38:39.248 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:38:39.248 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 72277 /var/tmp/bdevperf.sock 00:38:39.248 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72277 ']' 00:38:39.248 
13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:39.248 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:39.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:39.248 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:39.248 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:39.248 13:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:38:39.508 [2024-11-20 13:59:36.579229] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:38:39.508 [2024-11-20 13:59:36.579304] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72277 ] 00:38:39.509 [2024-11-20 13:59:36.731547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:39.509 [2024-11-20 13:59:36.804285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:39.769 [2024-11-20 13:59:36.881275] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:40.339 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:40.339 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:38:40.339 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XijMJyf6Ks 00:38:40.599 13:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:38:40.858 [2024-11-20 13:59:37.973609] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:40.858 nvme0n1 00:38:40.858 13:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:41.117 Running I/O for 1 seconds... 
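The xtrace above interleaves two RPC conversations; collected in one place, the setup exercised by this verify run is: register the PSK and the TLS-gated listener on the target, then register the same key on bdevperf's own RPC socket and attach with --psk. All of the commands below are taken directly from the trace (setup_nvmf_tgt at target/tls.sh@52-59 and the attach at target/tls.sh@229-230).

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Target side (default RPC socket /var/tmp/spdk.sock)
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 /tmp/tmp.XijMJyf6Ks
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

# Initiator side (bdevperf's RPC socket)
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XijMJyf6Ks
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1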
00:38:42.055 5143.00 IOPS, 20.09 MiB/s 00:38:42.055 Latency(us) 00:38:42.055 [2024-11-20T13:59:39.378Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:42.055 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:38:42.055 Verification LBA range: start 0x0 length 0x2000 00:38:42.055 nvme0n1 : 1.01 5201.19 20.32 0.00 0.00 24415.95 5094.06 20032.84 00:38:42.055 [2024-11-20T13:59:39.378Z] =================================================================================================================== 00:38:42.055 [2024-11-20T13:59:39.378Z] Total : 5201.19 20.32 0.00 0.00 24415.95 5094.06 20032.84 00:38:42.055 { 00:38:42.055 "results": [ 00:38:42.055 { 00:38:42.055 "job": "nvme0n1", 00:38:42.055 "core_mask": "0x2", 00:38:42.055 "workload": "verify", 00:38:42.055 "status": "finished", 00:38:42.055 "verify_range": { 00:38:42.055 "start": 0, 00:38:42.055 "length": 8192 00:38:42.055 }, 00:38:42.055 "queue_depth": 128, 00:38:42.055 "io_size": 4096, 00:38:42.055 "runtime": 1.013422, 00:38:42.055 "iops": 5201.1896327492395, 00:38:42.055 "mibps": 20.317147002926717, 00:38:42.055 "io_failed": 0, 00:38:42.055 "io_timeout": 0, 00:38:42.055 "avg_latency_us": 24415.948224568972, 00:38:42.055 "min_latency_us": 5094.064628820961, 00:38:42.055 "max_latency_us": 20032.838427947598 00:38:42.055 } 00:38:42.055 ], 00:38:42.055 "core_count": 1 00:38:42.055 } 00:38:42.055 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 72277 00:38:42.055 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72277 ']' 00:38:42.055 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72277 00:38:42.055 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:38:42.055 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:42.055 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72277 00:38:42.055 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:42.055 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:42.055 killing process with pid 72277 00:38:42.055 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72277' 00:38:42.055 Received shutdown signal, test time was about 1.000000 seconds 00:38:42.055 00:38:42.055 Latency(us) 00:38:42.055 [2024-11-20T13:59:39.378Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:42.055 [2024-11-20T13:59:39.378Z] =================================================================================================================== 00:38:42.055 [2024-11-20T13:59:39.378Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:42.055 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72277 00:38:42.055 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72277 00:38:42.314 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 72222 00:38:42.314 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72222 ']' 00:38:42.314 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72222 00:38:42.314 13:59:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:38:42.314 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:42.314 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72222 00:38:42.314 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:42.314 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:42.314 killing process with pid 72222 00:38:42.314 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72222' 00:38:42.314 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72222 00:38:42.314 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72222 00:38:42.575 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:38:42.575 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:42.575 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:42.575 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:38:42.575 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72334 00:38:42.575 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:38:42.575 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72334 00:38:42.575 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72334 ']' 00:38:42.575 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:42.575 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:42.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:42.575 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:42.575 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:42.575 13:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:38:42.834 [2024-11-20 13:59:39.919361] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:38:42.835 [2024-11-20 13:59:39.919458] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:42.835 [2024-11-20 13:59:40.070823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:42.835 [2024-11-20 13:59:40.137342] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:42.835 [2024-11-20 13:59:40.137400] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:38:42.835 [2024-11-20 13:59:40.137407] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:42.835 [2024-11-20 13:59:40.137412] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:42.835 [2024-11-20 13:59:40.137417] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:42.835 [2024-11-20 13:59:40.137720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:43.094 [2024-11-20 13:59:40.210189] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:43.669 13:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:43.669 13:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:38:43.669 13:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:43.669 13:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:43.669 13:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:38:43.669 13:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:43.669 13:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:38:43.669 13:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:43.669 13:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:38:43.669 [2024-11-20 13:59:40.907716] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:43.669 malloc0 00:38:43.669 [2024-11-20 13:59:40.937869] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:43.669 [2024-11-20 13:59:40.938079] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:38:43.669 13:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:43.669 13:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=72365 00:38:43.669 13:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:38:43.669 13:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 72365 /var/tmp/bdevperf.sock 00:38:43.669 13:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72365 ']' 00:38:43.669 13:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:43.669 13:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:43.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:43.669 13:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:38:43.669 13:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:43.669 13:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:38:43.937 [2024-11-20 13:59:41.024308] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:38:43.937 [2024-11-20 13:59:41.024377] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72365 ] 00:38:43.937 [2024-11-20 13:59:41.174671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:43.937 [2024-11-20 13:59:41.253599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:44.198 [2024-11-20 13:59:41.330266] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:44.768 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:44.768 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:38:44.768 13:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XijMJyf6Ks 00:38:45.028 13:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:38:45.028 [2024-11-20 13:59:42.299261] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:45.287 nvme0n1 00:38:45.287 13:59:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:45.287 Running I/O for 1 seconds... 
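As in the earlier 10-second TLS run, bdevperf itself sits idle here because it was started with -z; the workload is kicked off externally once the controller is attached, which is what the perform_tests call above does:

# Trigger the configured verify workload over bdevperf's RPC socket (as traced above);
# -t <seconds> bounds the overall test time, e.g. -t 20 in the earlier run.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests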
00:38:46.225 5888.00 IOPS, 23.00 MiB/s 00:38:46.225 Latency(us) 00:38:46.225 [2024-11-20T13:59:43.548Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:46.225 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:38:46.225 Verification LBA range: start 0x0 length 0x2000 00:38:46.225 nvme0n1 : 1.02 5889.06 23.00 0.00 0.00 21557.98 5494.72 13851.28 00:38:46.225 [2024-11-20T13:59:43.548Z] =================================================================================================================== 00:38:46.225 [2024-11-20T13:59:43.548Z] Total : 5889.06 23.00 0.00 0.00 21557.98 5494.72 13851.28 00:38:46.225 { 00:38:46.225 "results": [ 00:38:46.225 { 00:38:46.225 "job": "nvme0n1", 00:38:46.225 "core_mask": "0x2", 00:38:46.225 "workload": "verify", 00:38:46.225 "status": "finished", 00:38:46.225 "verify_range": { 00:38:46.225 "start": 0, 00:38:46.225 "length": 8192 00:38:46.225 }, 00:38:46.225 "queue_depth": 128, 00:38:46.225 "io_size": 4096, 00:38:46.225 "runtime": 1.021555, 00:38:46.225 "iops": 5889.061284022887, 00:38:46.225 "mibps": 23.004145640714402, 00:38:46.225 "io_failed": 0, 00:38:46.225 "io_timeout": 0, 00:38:46.225 "avg_latency_us": 21557.980488711328, 00:38:46.225 "min_latency_us": 5494.721397379913, 00:38:46.225 "max_latency_us": 13851.276855895196 00:38:46.225 } 00:38:46.225 ], 00:38:46.225 "core_count": 1 00:38:46.225 } 00:38:46.225 13:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:38:46.225 13:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:46.225 13:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:38:46.485 13:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:46.485 13:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:38:46.485 "subsystems": [ 00:38:46.485 { 00:38:46.485 "subsystem": "keyring", 00:38:46.485 "config": [ 00:38:46.485 { 00:38:46.485 "method": "keyring_file_add_key", 00:38:46.485 "params": { 00:38:46.485 "name": "key0", 00:38:46.485 "path": "/tmp/tmp.XijMJyf6Ks" 00:38:46.485 } 00:38:46.485 } 00:38:46.485 ] 00:38:46.485 }, 00:38:46.485 { 00:38:46.485 "subsystem": "iobuf", 00:38:46.485 "config": [ 00:38:46.485 { 00:38:46.485 "method": "iobuf_set_options", 00:38:46.485 "params": { 00:38:46.485 "small_pool_count": 8192, 00:38:46.485 "large_pool_count": 1024, 00:38:46.485 "small_bufsize": 8192, 00:38:46.485 "large_bufsize": 135168, 00:38:46.485 "enable_numa": false 00:38:46.485 } 00:38:46.485 } 00:38:46.485 ] 00:38:46.485 }, 00:38:46.485 { 00:38:46.485 "subsystem": "sock", 00:38:46.485 "config": [ 00:38:46.485 { 00:38:46.485 "method": "sock_set_default_impl", 00:38:46.485 "params": { 00:38:46.485 "impl_name": "uring" 00:38:46.485 } 00:38:46.485 }, 00:38:46.485 { 00:38:46.485 "method": "sock_impl_set_options", 00:38:46.485 "params": { 00:38:46.485 "impl_name": "ssl", 00:38:46.485 "recv_buf_size": 4096, 00:38:46.485 "send_buf_size": 4096, 00:38:46.485 "enable_recv_pipe": true, 00:38:46.485 "enable_quickack": false, 00:38:46.485 "enable_placement_id": 0, 00:38:46.485 "enable_zerocopy_send_server": true, 00:38:46.485 "enable_zerocopy_send_client": false, 00:38:46.485 "zerocopy_threshold": 0, 00:38:46.485 "tls_version": 0, 00:38:46.485 "enable_ktls": false 00:38:46.485 } 00:38:46.485 }, 00:38:46.485 { 00:38:46.485 "method": "sock_impl_set_options", 00:38:46.485 "params": { 00:38:46.485 "impl_name": 
"posix", 00:38:46.485 "recv_buf_size": 2097152, 00:38:46.486 "send_buf_size": 2097152, 00:38:46.486 "enable_recv_pipe": true, 00:38:46.486 "enable_quickack": false, 00:38:46.486 "enable_placement_id": 0, 00:38:46.486 "enable_zerocopy_send_server": true, 00:38:46.486 "enable_zerocopy_send_client": false, 00:38:46.486 "zerocopy_threshold": 0, 00:38:46.486 "tls_version": 0, 00:38:46.486 "enable_ktls": false 00:38:46.486 } 00:38:46.486 }, 00:38:46.486 { 00:38:46.486 "method": "sock_impl_set_options", 00:38:46.486 "params": { 00:38:46.486 "impl_name": "uring", 00:38:46.486 "recv_buf_size": 2097152, 00:38:46.486 "send_buf_size": 2097152, 00:38:46.486 "enable_recv_pipe": true, 00:38:46.486 "enable_quickack": false, 00:38:46.486 "enable_placement_id": 0, 00:38:46.486 "enable_zerocopy_send_server": false, 00:38:46.486 "enable_zerocopy_send_client": false, 00:38:46.486 "zerocopy_threshold": 0, 00:38:46.486 "tls_version": 0, 00:38:46.486 "enable_ktls": false 00:38:46.486 } 00:38:46.486 } 00:38:46.486 ] 00:38:46.486 }, 00:38:46.486 { 00:38:46.486 "subsystem": "vmd", 00:38:46.486 "config": [] 00:38:46.486 }, 00:38:46.486 { 00:38:46.486 "subsystem": "accel", 00:38:46.486 "config": [ 00:38:46.486 { 00:38:46.486 "method": "accel_set_options", 00:38:46.486 "params": { 00:38:46.486 "small_cache_size": 128, 00:38:46.486 "large_cache_size": 16, 00:38:46.486 "task_count": 2048, 00:38:46.486 "sequence_count": 2048, 00:38:46.486 "buf_count": 2048 00:38:46.486 } 00:38:46.486 } 00:38:46.486 ] 00:38:46.486 }, 00:38:46.486 { 00:38:46.486 "subsystem": "bdev", 00:38:46.486 "config": [ 00:38:46.486 { 00:38:46.486 "method": "bdev_set_options", 00:38:46.486 "params": { 00:38:46.486 "bdev_io_pool_size": 65535, 00:38:46.486 "bdev_io_cache_size": 256, 00:38:46.486 "bdev_auto_examine": true, 00:38:46.486 "iobuf_small_cache_size": 128, 00:38:46.486 "iobuf_large_cache_size": 16 00:38:46.486 } 00:38:46.486 }, 00:38:46.486 { 00:38:46.486 "method": "bdev_raid_set_options", 00:38:46.486 "params": { 00:38:46.486 "process_window_size_kb": 1024, 00:38:46.486 "process_max_bandwidth_mb_sec": 0 00:38:46.486 } 00:38:46.486 }, 00:38:46.486 { 00:38:46.486 "method": "bdev_iscsi_set_options", 00:38:46.486 "params": { 00:38:46.486 "timeout_sec": 30 00:38:46.486 } 00:38:46.486 }, 00:38:46.486 { 00:38:46.486 "method": "bdev_nvme_set_options", 00:38:46.486 "params": { 00:38:46.486 "action_on_timeout": "none", 00:38:46.486 "timeout_us": 0, 00:38:46.486 "timeout_admin_us": 0, 00:38:46.486 "keep_alive_timeout_ms": 10000, 00:38:46.486 "arbitration_burst": 0, 00:38:46.486 "low_priority_weight": 0, 00:38:46.486 "medium_priority_weight": 0, 00:38:46.486 "high_priority_weight": 0, 00:38:46.486 "nvme_adminq_poll_period_us": 10000, 00:38:46.486 "nvme_ioq_poll_period_us": 0, 00:38:46.486 "io_queue_requests": 0, 00:38:46.486 "delay_cmd_submit": true, 00:38:46.486 "transport_retry_count": 4, 00:38:46.486 "bdev_retry_count": 3, 00:38:46.486 "transport_ack_timeout": 0, 00:38:46.486 "ctrlr_loss_timeout_sec": 0, 00:38:46.486 "reconnect_delay_sec": 0, 00:38:46.486 "fast_io_fail_timeout_sec": 0, 00:38:46.486 "disable_auto_failback": false, 00:38:46.486 "generate_uuids": false, 00:38:46.486 "transport_tos": 0, 00:38:46.486 "nvme_error_stat": false, 00:38:46.486 "rdma_srq_size": 0, 00:38:46.486 "io_path_stat": false, 00:38:46.486 "allow_accel_sequence": false, 00:38:46.486 "rdma_max_cq_size": 0, 00:38:46.486 "rdma_cm_event_timeout_ms": 0, 00:38:46.486 "dhchap_digests": [ 00:38:46.486 "sha256", 00:38:46.486 "sha384", 00:38:46.486 "sha512" 00:38:46.486 ], 00:38:46.486 
"dhchap_dhgroups": [ 00:38:46.486 "null", 00:38:46.486 "ffdhe2048", 00:38:46.486 "ffdhe3072", 00:38:46.486 "ffdhe4096", 00:38:46.486 "ffdhe6144", 00:38:46.486 "ffdhe8192" 00:38:46.486 ] 00:38:46.486 } 00:38:46.486 }, 00:38:46.486 { 00:38:46.486 "method": "bdev_nvme_set_hotplug", 00:38:46.486 "params": { 00:38:46.486 "period_us": 100000, 00:38:46.486 "enable": false 00:38:46.486 } 00:38:46.486 }, 00:38:46.486 { 00:38:46.486 "method": "bdev_malloc_create", 00:38:46.486 "params": { 00:38:46.486 "name": "malloc0", 00:38:46.486 "num_blocks": 8192, 00:38:46.486 "block_size": 4096, 00:38:46.486 "physical_block_size": 4096, 00:38:46.486 "uuid": "9c162518-4743-4814-965a-f559a1fe1a7a", 00:38:46.486 "optimal_io_boundary": 0, 00:38:46.486 "md_size": 0, 00:38:46.486 "dif_type": 0, 00:38:46.486 "dif_is_head_of_md": false, 00:38:46.486 "dif_pi_format": 0 00:38:46.486 } 00:38:46.486 }, 00:38:46.486 { 00:38:46.486 "method": "bdev_wait_for_examine" 00:38:46.486 } 00:38:46.486 ] 00:38:46.486 }, 00:38:46.486 { 00:38:46.486 "subsystem": "nbd", 00:38:46.486 "config": [] 00:38:46.486 }, 00:38:46.486 { 00:38:46.486 "subsystem": "scheduler", 00:38:46.486 "config": [ 00:38:46.486 { 00:38:46.486 "method": "framework_set_scheduler", 00:38:46.486 "params": { 00:38:46.486 "name": "static" 00:38:46.486 } 00:38:46.486 } 00:38:46.486 ] 00:38:46.486 }, 00:38:46.486 { 00:38:46.486 "subsystem": "nvmf", 00:38:46.486 "config": [ 00:38:46.486 { 00:38:46.486 "method": "nvmf_set_config", 00:38:46.486 "params": { 00:38:46.486 "discovery_filter": "match_any", 00:38:46.486 "admin_cmd_passthru": { 00:38:46.486 "identify_ctrlr": false 00:38:46.486 }, 00:38:46.486 "dhchap_digests": [ 00:38:46.486 "sha256", 00:38:46.486 "sha384", 00:38:46.486 "sha512" 00:38:46.486 ], 00:38:46.486 "dhchap_dhgroups": [ 00:38:46.486 "null", 00:38:46.486 "ffdhe2048", 00:38:46.486 "ffdhe3072", 00:38:46.486 "ffdhe4096", 00:38:46.486 "ffdhe6144", 00:38:46.486 "ffdhe8192" 00:38:46.486 ] 00:38:46.486 } 00:38:46.486 }, 00:38:46.486 { 00:38:46.486 "method": "nvmf_set_max_subsystems", 00:38:46.486 "params": { 00:38:46.486 "max_subsystems": 1024 00:38:46.486 } 00:38:46.486 }, 00:38:46.486 { 00:38:46.486 "method": "nvmf_set_crdt", 00:38:46.486 "params": { 00:38:46.486 "crdt1": 0, 00:38:46.486 "crdt2": 0, 00:38:46.486 "crdt3": 0 00:38:46.486 } 00:38:46.486 }, 00:38:46.486 { 00:38:46.486 "method": "nvmf_create_transport", 00:38:46.486 "params": { 00:38:46.486 "trtype": "TCP", 00:38:46.486 "max_queue_depth": 128, 00:38:46.486 "max_io_qpairs_per_ctrlr": 127, 00:38:46.486 "in_capsule_data_size": 4096, 00:38:46.486 "max_io_size": 131072, 00:38:46.486 "io_unit_size": 131072, 00:38:46.486 "max_aq_depth": 128, 00:38:46.486 "num_shared_buffers": 511, 00:38:46.486 "buf_cache_size": 4294967295, 00:38:46.486 "dif_insert_or_strip": false, 00:38:46.486 "zcopy": false, 00:38:46.487 "c2h_success": false, 00:38:46.487 "sock_priority": 0, 00:38:46.487 "abort_timeout_sec": 1, 00:38:46.487 "ack_timeout": 0, 00:38:46.487 "data_wr_pool_size": 0 00:38:46.487 } 00:38:46.487 }, 00:38:46.487 { 00:38:46.487 "method": "nvmf_create_subsystem", 00:38:46.487 "params": { 00:38:46.487 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:46.487 "allow_any_host": false, 00:38:46.487 "serial_number": "00000000000000000000", 00:38:46.487 "model_number": "SPDK bdev Controller", 00:38:46.487 "max_namespaces": 32, 00:38:46.487 "min_cntlid": 1, 00:38:46.487 "max_cntlid": 65519, 00:38:46.487 "ana_reporting": false 00:38:46.487 } 00:38:46.487 }, 00:38:46.487 { 00:38:46.487 "method": "nvmf_subsystem_add_host", 
00:38:46.487 "params": { 00:38:46.487 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:46.487 "host": "nqn.2016-06.io.spdk:host1", 00:38:46.487 "psk": "key0" 00:38:46.487 } 00:38:46.487 }, 00:38:46.487 { 00:38:46.487 "method": "nvmf_subsystem_add_ns", 00:38:46.487 "params": { 00:38:46.487 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:46.487 "namespace": { 00:38:46.487 "nsid": 1, 00:38:46.487 "bdev_name": "malloc0", 00:38:46.487 "nguid": "9C16251847434814965AF559A1FE1A7A", 00:38:46.487 "uuid": "9c162518-4743-4814-965a-f559a1fe1a7a", 00:38:46.487 "no_auto_visible": false 00:38:46.487 } 00:38:46.487 } 00:38:46.487 }, 00:38:46.487 { 00:38:46.487 "method": "nvmf_subsystem_add_listener", 00:38:46.487 "params": { 00:38:46.487 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:46.487 "listen_address": { 00:38:46.487 "trtype": "TCP", 00:38:46.487 "adrfam": "IPv4", 00:38:46.487 "traddr": "10.0.0.3", 00:38:46.487 "trsvcid": "4420" 00:38:46.487 }, 00:38:46.487 "secure_channel": false, 00:38:46.487 "sock_impl": "ssl" 00:38:46.487 } 00:38:46.487 } 00:38:46.487 ] 00:38:46.487 } 00:38:46.487 ] 00:38:46.487 }' 00:38:46.487 13:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:38:46.748 13:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:38:46.748 "subsystems": [ 00:38:46.748 { 00:38:46.748 "subsystem": "keyring", 00:38:46.748 "config": [ 00:38:46.748 { 00:38:46.748 "method": "keyring_file_add_key", 00:38:46.748 "params": { 00:38:46.748 "name": "key0", 00:38:46.748 "path": "/tmp/tmp.XijMJyf6Ks" 00:38:46.748 } 00:38:46.748 } 00:38:46.748 ] 00:38:46.748 }, 00:38:46.748 { 00:38:46.748 "subsystem": "iobuf", 00:38:46.748 "config": [ 00:38:46.748 { 00:38:46.748 "method": "iobuf_set_options", 00:38:46.748 "params": { 00:38:46.748 "small_pool_count": 8192, 00:38:46.748 "large_pool_count": 1024, 00:38:46.748 "small_bufsize": 8192, 00:38:46.748 "large_bufsize": 135168, 00:38:46.748 "enable_numa": false 00:38:46.748 } 00:38:46.748 } 00:38:46.748 ] 00:38:46.748 }, 00:38:46.748 { 00:38:46.748 "subsystem": "sock", 00:38:46.748 "config": [ 00:38:46.748 { 00:38:46.748 "method": "sock_set_default_impl", 00:38:46.748 "params": { 00:38:46.748 "impl_name": "uring" 00:38:46.748 } 00:38:46.748 }, 00:38:46.748 { 00:38:46.748 "method": "sock_impl_set_options", 00:38:46.748 "params": { 00:38:46.748 "impl_name": "ssl", 00:38:46.748 "recv_buf_size": 4096, 00:38:46.748 "send_buf_size": 4096, 00:38:46.748 "enable_recv_pipe": true, 00:38:46.748 "enable_quickack": false, 00:38:46.748 "enable_placement_id": 0, 00:38:46.748 "enable_zerocopy_send_server": true, 00:38:46.748 "enable_zerocopy_send_client": false, 00:38:46.748 "zerocopy_threshold": 0, 00:38:46.748 "tls_version": 0, 00:38:46.748 "enable_ktls": false 00:38:46.748 } 00:38:46.748 }, 00:38:46.748 { 00:38:46.748 "method": "sock_impl_set_options", 00:38:46.748 "params": { 00:38:46.748 "impl_name": "posix", 00:38:46.748 "recv_buf_size": 2097152, 00:38:46.748 "send_buf_size": 2097152, 00:38:46.748 "enable_recv_pipe": true, 00:38:46.748 "enable_quickack": false, 00:38:46.748 "enable_placement_id": 0, 00:38:46.748 "enable_zerocopy_send_server": true, 00:38:46.748 "enable_zerocopy_send_client": false, 00:38:46.748 "zerocopy_threshold": 0, 00:38:46.748 "tls_version": 0, 00:38:46.748 "enable_ktls": false 00:38:46.748 } 00:38:46.748 }, 00:38:46.748 { 00:38:46.748 "method": "sock_impl_set_options", 00:38:46.748 "params": { 00:38:46.748 "impl_name": "uring", 00:38:46.748 
"recv_buf_size": 2097152, 00:38:46.748 "send_buf_size": 2097152, 00:38:46.748 "enable_recv_pipe": true, 00:38:46.748 "enable_quickack": false, 00:38:46.748 "enable_placement_id": 0, 00:38:46.748 "enable_zerocopy_send_server": false, 00:38:46.748 "enable_zerocopy_send_client": false, 00:38:46.748 "zerocopy_threshold": 0, 00:38:46.748 "tls_version": 0, 00:38:46.748 "enable_ktls": false 00:38:46.748 } 00:38:46.748 } 00:38:46.748 ] 00:38:46.748 }, 00:38:46.748 { 00:38:46.748 "subsystem": "vmd", 00:38:46.748 "config": [] 00:38:46.748 }, 00:38:46.748 { 00:38:46.748 "subsystem": "accel", 00:38:46.748 "config": [ 00:38:46.748 { 00:38:46.748 "method": "accel_set_options", 00:38:46.748 "params": { 00:38:46.748 "small_cache_size": 128, 00:38:46.748 "large_cache_size": 16, 00:38:46.748 "task_count": 2048, 00:38:46.748 "sequence_count": 2048, 00:38:46.748 "buf_count": 2048 00:38:46.748 } 00:38:46.748 } 00:38:46.748 ] 00:38:46.748 }, 00:38:46.748 { 00:38:46.748 "subsystem": "bdev", 00:38:46.748 "config": [ 00:38:46.748 { 00:38:46.748 "method": "bdev_set_options", 00:38:46.748 "params": { 00:38:46.748 "bdev_io_pool_size": 65535, 00:38:46.748 "bdev_io_cache_size": 256, 00:38:46.748 "bdev_auto_examine": true, 00:38:46.748 "iobuf_small_cache_size": 128, 00:38:46.748 "iobuf_large_cache_size": 16 00:38:46.748 } 00:38:46.748 }, 00:38:46.748 { 00:38:46.748 "method": "bdev_raid_set_options", 00:38:46.748 "params": { 00:38:46.748 "process_window_size_kb": 1024, 00:38:46.748 "process_max_bandwidth_mb_sec": 0 00:38:46.748 } 00:38:46.748 }, 00:38:46.748 { 00:38:46.748 "method": "bdev_iscsi_set_options", 00:38:46.748 "params": { 00:38:46.748 "timeout_sec": 30 00:38:46.748 } 00:38:46.748 }, 00:38:46.748 { 00:38:46.748 "method": "bdev_nvme_set_options", 00:38:46.748 "params": { 00:38:46.748 "action_on_timeout": "none", 00:38:46.748 "timeout_us": 0, 00:38:46.748 "timeout_admin_us": 0, 00:38:46.748 "keep_alive_timeout_ms": 10000, 00:38:46.748 "arbitration_burst": 0, 00:38:46.748 "low_priority_weight": 0, 00:38:46.748 "medium_priority_weight": 0, 00:38:46.748 "high_priority_weight": 0, 00:38:46.748 "nvme_adminq_poll_period_us": 10000, 00:38:46.748 "nvme_ioq_poll_period_us": 0, 00:38:46.748 "io_queue_requests": 512, 00:38:46.748 "delay_cmd_submit": true, 00:38:46.748 "transport_retry_count": 4, 00:38:46.748 "bdev_retry_count": 3, 00:38:46.748 "transport_ack_timeout": 0, 00:38:46.748 "ctrlr_loss_timeout_sec": 0, 00:38:46.749 "reconnect_delay_sec": 0, 00:38:46.749 "fast_io_fail_timeout_sec": 0, 00:38:46.749 "disable_auto_failback": false, 00:38:46.749 "generate_uuids": false, 00:38:46.749 "transport_tos": 0, 00:38:46.749 "nvme_error_stat": false, 00:38:46.749 "rdma_srq_size": 0, 00:38:46.749 "io_path_stat": false, 00:38:46.749 "allow_accel_sequence": false, 00:38:46.749 "rdma_max_cq_size": 0, 00:38:46.749 "rdma_cm_event_timeout_ms": 0, 00:38:46.749 "dhchap_digests": [ 00:38:46.749 "sha256", 00:38:46.749 "sha384", 00:38:46.749 "sha512" 00:38:46.749 ], 00:38:46.749 "dhchap_dhgroups": [ 00:38:46.749 "null", 00:38:46.749 "ffdhe2048", 00:38:46.749 "ffdhe3072", 00:38:46.749 "ffdhe4096", 00:38:46.749 "ffdhe6144", 00:38:46.749 "ffdhe8192" 00:38:46.749 ] 00:38:46.749 } 00:38:46.749 }, 00:38:46.749 { 00:38:46.749 "method": "bdev_nvme_attach_controller", 00:38:46.749 "params": { 00:38:46.749 "name": "nvme0", 00:38:46.749 "trtype": "TCP", 00:38:46.749 "adrfam": "IPv4", 00:38:46.749 "traddr": "10.0.0.3", 00:38:46.749 "trsvcid": "4420", 00:38:46.749 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:46.749 "prchk_reftag": false, 00:38:46.749 
"prchk_guard": false, 00:38:46.749 "ctrlr_loss_timeout_sec": 0, 00:38:46.749 "reconnect_delay_sec": 0, 00:38:46.749 "fast_io_fail_timeout_sec": 0, 00:38:46.749 "psk": "key0", 00:38:46.749 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:46.749 "hdgst": false, 00:38:46.749 "ddgst": false, 00:38:46.749 "multipath": "multipath" 00:38:46.749 } 00:38:46.749 }, 00:38:46.749 { 00:38:46.749 "method": "bdev_nvme_set_hotplug", 00:38:46.749 "params": { 00:38:46.749 "period_us": 100000, 00:38:46.749 "enable": false 00:38:46.749 } 00:38:46.749 }, 00:38:46.749 { 00:38:46.749 "method": "bdev_enable_histogram", 00:38:46.749 "params": { 00:38:46.749 "name": "nvme0n1", 00:38:46.749 "enable": true 00:38:46.749 } 00:38:46.749 }, 00:38:46.749 { 00:38:46.749 "method": "bdev_wait_for_examine" 00:38:46.749 } 00:38:46.749 ] 00:38:46.749 }, 00:38:46.749 { 00:38:46.749 "subsystem": "nbd", 00:38:46.749 "config": [] 00:38:46.749 } 00:38:46.749 ] 00:38:46.749 }' 00:38:46.749 13:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 72365 00:38:46.749 13:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72365 ']' 00:38:46.749 13:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72365 00:38:46.749 13:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:38:46.749 13:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:46.749 13:59:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72365 00:38:46.749 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:46.749 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:46.749 killing process with pid 72365 00:38:46.749 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72365' 00:38:46.749 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72365 00:38:46.749 Received shutdown signal, test time was about 1.000000 seconds 00:38:46.749 00:38:46.749 Latency(us) 00:38:46.749 [2024-11-20T13:59:44.072Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:46.749 [2024-11-20T13:59:44.072Z] =================================================================================================================== 00:38:46.749 [2024-11-20T13:59:44.072Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:46.749 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72365 00:38:47.009 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 72334 00:38:47.010 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72334 ']' 00:38:47.010 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72334 00:38:47.010 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:38:47.010 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:47.010 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72334 00:38:47.010 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:47.010 13:59:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:47.010 killing process with pid 72334 00:38:47.010 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72334' 00:38:47.010 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72334 00:38:47.010 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72334 00:38:47.270 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:38:47.270 "subsystems": [ 00:38:47.270 { 00:38:47.270 "subsystem": "keyring", 00:38:47.270 "config": [ 00:38:47.270 { 00:38:47.270 "method": "keyring_file_add_key", 00:38:47.270 "params": { 00:38:47.270 "name": "key0", 00:38:47.270 "path": "/tmp/tmp.XijMJyf6Ks" 00:38:47.270 } 00:38:47.270 } 00:38:47.270 ] 00:38:47.270 }, 00:38:47.270 { 00:38:47.270 "subsystem": "iobuf", 00:38:47.270 "config": [ 00:38:47.270 { 00:38:47.270 "method": "iobuf_set_options", 00:38:47.270 "params": { 00:38:47.270 "small_pool_count": 8192, 00:38:47.270 "large_pool_count": 1024, 00:38:47.270 "small_bufsize": 8192, 00:38:47.270 "large_bufsize": 135168, 00:38:47.270 "enable_numa": false 00:38:47.270 } 00:38:47.270 } 00:38:47.270 ] 00:38:47.270 }, 00:38:47.270 { 00:38:47.270 "subsystem": "sock", 00:38:47.270 "config": [ 00:38:47.270 { 00:38:47.270 "method": "sock_set_default_impl", 00:38:47.270 "params": { 00:38:47.270 "impl_name": "uring" 00:38:47.270 } 00:38:47.270 }, 00:38:47.270 { 00:38:47.270 "method": "sock_impl_set_options", 00:38:47.270 "params": { 00:38:47.270 "impl_name": "ssl", 00:38:47.270 "recv_buf_size": 4096, 00:38:47.270 "send_buf_size": 4096, 00:38:47.270 "enable_recv_pipe": true, 00:38:47.270 "enable_quickack": false, 00:38:47.270 "enable_placement_id": 0, 00:38:47.270 "enable_zerocopy_send_server": true, 00:38:47.270 "enable_zerocopy_send_client": false, 00:38:47.270 "zerocopy_threshold": 0, 00:38:47.270 "tls_version": 0, 00:38:47.271 "enable_ktls": false 00:38:47.271 } 00:38:47.271 }, 00:38:47.271 { 00:38:47.271 "method": "sock_impl_set_options", 00:38:47.271 "params": { 00:38:47.271 "impl_name": "posix", 00:38:47.271 "recv_buf_size": 2097152, 00:38:47.271 "send_buf_size": 2097152, 00:38:47.271 "enable_recv_pipe": true, 00:38:47.271 "enable_quickack": false, 00:38:47.271 "enable_placement_id": 0, 00:38:47.271 "enable_zerocopy_send_server": true, 00:38:47.271 "enable_zerocopy_send_client": false, 00:38:47.271 "zerocopy_threshold": 0, 00:38:47.271 "tls_version": 0, 00:38:47.271 "enable_ktls": false 00:38:47.271 } 00:38:47.271 }, 00:38:47.271 { 00:38:47.271 "method": "sock_impl_set_options", 00:38:47.271 "params": { 00:38:47.271 "impl_name": "uring", 00:38:47.271 "recv_buf_size": 2097152, 00:38:47.271 "send_buf_size": 2097152, 00:38:47.271 "enable_recv_pipe": true, 00:38:47.271 "enable_quickack": false, 00:38:47.271 "enable_placement_id": 0, 00:38:47.271 "enable_zerocopy_send_server": false, 00:38:47.271 "enable_zerocopy_send_client": false, 00:38:47.271 "zerocopy_threshold": 0, 00:38:47.271 "tls_version": 0, 00:38:47.271 "enable_ktls": false 00:38:47.271 } 00:38:47.271 } 00:38:47.271 ] 00:38:47.271 }, 00:38:47.271 { 00:38:47.271 "subsystem": "vmd", 00:38:47.271 "config": [] 00:38:47.271 }, 00:38:47.271 { 00:38:47.271 "subsystem": "accel", 00:38:47.271 "config": [ 00:38:47.271 { 00:38:47.271 "method": "accel_set_options", 00:38:47.271 "params": { 00:38:47.271 "small_cache_size": 128, 00:38:47.271 "large_cache_size": 16, 
00:38:47.271 "task_count": 2048, 00:38:47.271 "sequence_count": 2048, 00:38:47.271 "buf_count": 2048 00:38:47.271 } 00:38:47.271 } 00:38:47.271 ] 00:38:47.271 }, 00:38:47.271 { 00:38:47.271 "subsystem": "bdev", 00:38:47.271 "config": [ 00:38:47.271 { 00:38:47.271 "method": "bdev_set_options", 00:38:47.271 "params": { 00:38:47.271 "bdev_io_pool_size": 65535, 00:38:47.271 "bdev_io_cache_size": 256, 00:38:47.271 "bdev_auto_examine": true, 00:38:47.271 "iobuf_small_cache_size": 128, 00:38:47.271 "iobuf_large_cache_size": 16 00:38:47.271 } 00:38:47.271 }, 00:38:47.271 { 00:38:47.271 "method": "bdev_raid_set_options", 00:38:47.271 "params": { 00:38:47.271 "process_window_size_kb": 1024, 00:38:47.271 "process_max_bandwidth_mb_sec": 0 00:38:47.271 } 00:38:47.271 }, 00:38:47.271 { 00:38:47.271 "method": "bdev_iscsi_set_options", 00:38:47.271 "params": { 00:38:47.271 "timeout_sec": 30 00:38:47.271 } 00:38:47.271 }, 00:38:47.271 { 00:38:47.271 "method": "bdev_nvme_set_options", 00:38:47.271 "params": { 00:38:47.271 "action_on_timeout": "none", 00:38:47.271 "timeout_us": 0, 00:38:47.271 "timeout_admin_us": 0, 00:38:47.271 "keep_alive_timeout_ms": 10000, 00:38:47.271 "arbitration_burst": 0, 00:38:47.271 "low_priority_weight": 0, 00:38:47.271 "medium_priority_weight": 0, 00:38:47.271 "high_priority_weight": 0, 00:38:47.271 "nvme_adminq_poll_period_us": 10000, 00:38:47.271 "nvme_ioq_poll_period_us": 0, 00:38:47.271 "io_queue_requests": 0, 00:38:47.271 "delay_cmd_submit": true, 00:38:47.271 "transport_retry_count": 4, 00:38:47.271 "bdev_retry_count": 3, 00:38:47.271 "transport_ack_timeout": 0, 00:38:47.271 "ctrlr_loss_timeout_sec": 0, 00:38:47.271 "reconnect_delay_sec": 0, 00:38:47.271 "fast_io_fail_timeout_sec": 0, 00:38:47.271 "disable_auto_failback": false, 00:38:47.271 "generate_uuids": false, 00:38:47.271 "transport_tos": 0, 00:38:47.271 "nvme_error_stat": false, 00:38:47.271 "rdma_srq_size": 0, 00:38:47.271 "io_path_stat": false, 00:38:47.271 "allow_accel_sequence": false, 00:38:47.271 "rdma_max_cq_size": 0, 00:38:47.271 "rdma_cm_event_timeout_ms": 0, 00:38:47.271 "dhchap_digests": [ 00:38:47.271 "sha256", 00:38:47.271 "sha384", 00:38:47.271 "sha512" 00:38:47.271 ], 00:38:47.271 "dhchap_dhgroups": [ 00:38:47.271 "null", 00:38:47.271 "ffdhe2048", 00:38:47.271 "ffdhe3072", 00:38:47.271 "ffdhe4096", 00:38:47.271 "ffdhe6144", 00:38:47.271 "ffdhe8192" 00:38:47.271 ] 00:38:47.271 } 00:38:47.271 }, 00:38:47.271 { 00:38:47.271 "method": "bdev_nvme_set_hotplug", 00:38:47.271 "params": { 00:38:47.271 "period_us": 100000, 00:38:47.271 "enable": false 00:38:47.271 } 00:38:47.271 }, 00:38:47.271 { 00:38:47.271 "method": "bdev_malloc_create", 00:38:47.271 "params": { 00:38:47.271 "name": "malloc0", 00:38:47.271 "num_blocks": 8192, 00:38:47.271 "block_size": 4096, 00:38:47.271 "physical_block_size": 4096, 00:38:47.271 "uuid": "9c162518-4743-4814-965a-f559a1fe1a7a", 00:38:47.271 "optimal_io_boundary": 0, 00:38:47.271 "md_size": 0, 00:38:47.271 "dif_type": 0, 00:38:47.271 "dif_is_head_of_md": false, 00:38:47.271 "dif_pi_format": 0 00:38:47.271 } 00:38:47.271 }, 00:38:47.271 { 00:38:47.271 "method": "bdev_wait_for_examine" 00:38:47.271 } 00:38:47.271 ] 00:38:47.271 }, 00:38:47.271 { 00:38:47.271 "subsystem": "nbd", 00:38:47.271 "config": [] 00:38:47.271 }, 00:38:47.271 { 00:38:47.271 "subsystem": "scheduler", 00:38:47.271 "config": [ 00:38:47.271 { 00:38:47.271 "method": "framework_set_scheduler", 00:38:47.271 "params": { 00:38:47.271 "name": "static" 00:38:47.271 } 00:38:47.271 } 00:38:47.271 ] 00:38:47.271 }, 
00:38:47.271 { 00:38:47.271 "subsystem": "nvmf", 00:38:47.271 "config": [ 00:38:47.271 { 00:38:47.271 "method": "nvmf_set_config", 00:38:47.271 "params": { 00:38:47.271 "discovery_filter": "match_any", 00:38:47.271 "admin_cmd_passthru": { 00:38:47.271 "identify_ctrlr": false 00:38:47.271 }, 00:38:47.271 "dhchap_digests": [ 00:38:47.271 "sha256", 00:38:47.271 "sha384", 00:38:47.271 "sha512" 00:38:47.271 ], 00:38:47.271 "dhchap_dhgroups": [ 00:38:47.271 "null", 00:38:47.271 "ffdhe2048", 00:38:47.271 "ffdhe3072", 00:38:47.271 "ffdhe4096", 00:38:47.271 "ffdhe6144", 00:38:47.271 "ffdhe8192" 00:38:47.271 ] 00:38:47.271 } 00:38:47.271 }, 00:38:47.271 { 00:38:47.271 "method": "nvmf_set_max_subsystems", 00:38:47.271 "params": { 00:38:47.271 "max_subsystems": 1024 00:38:47.272 } 00:38:47.272 }, 00:38:47.272 { 00:38:47.272 "method": "nvmf_set_crdt", 00:38:47.272 "params": { 00:38:47.272 "crdt1": 0, 00:38:47.272 "crdt2": 0, 00:38:47.272 "crdt3": 0 00:38:47.272 } 00:38:47.272 }, 00:38:47.272 { 00:38:47.272 "method": "nvmf_create_transport", 00:38:47.272 "params": { 00:38:47.272 "trtype": "TCP", 00:38:47.272 "max_queue_depth": 128, 00:38:47.272 "max_io_qpairs_per_ctrlr": 127, 00:38:47.272 "in_capsule_data_size": 4096, 00:38:47.272 "max_io_size": 131072, 00:38:47.272 "io_unit_size": 131072, 00:38:47.272 "max_aq_depth": 128, 00:38:47.272 "num_shared_buffers": 511, 00:38:47.272 "buf_cache_size": 4294967295, 00:38:47.272 "dif_insert_or_strip": false, 00:38:47.272 "zcopy": false, 00:38:47.272 "c2h_success": false, 00:38:47.272 "sock_priority": 0, 00:38:47.272 "abort_timeout_sec": 1, 00:38:47.272 "ack_timeout": 0, 00:38:47.272 "data_wr_pool_size": 0 00:38:47.272 } 00:38:47.272 }, 00:38:47.272 { 00:38:47.272 "method": "nvmf_create_subsystem", 00:38:47.272 "params": { 00:38:47.272 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:47.272 "allow_any_host": false, 00:38:47.272 "serial_number": "00000000000000000000", 00:38:47.272 "model_number": "SPDK bdev Controller", 00:38:47.272 "max_namespaces": 32, 00:38:47.272 "min_cntlid": 1, 00:38:47.272 "max_cntlid": 65519, 00:38:47.272 "ana_reporting": false 00:38:47.272 } 00:38:47.272 }, 00:38:47.272 { 00:38:47.272 "method": "nvmf_subsystem_add_host", 00:38:47.272 "params": { 00:38:47.272 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:47.272 "host": "nqn.2016-06.io.spdk:host1", 00:38:47.272 "psk": "key0" 00:38:47.272 } 00:38:47.272 }, 00:38:47.272 { 00:38:47.272 "method": "nvmf_subsystem_add_ns", 00:38:47.272 "params": { 00:38:47.272 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:47.272 "namespace": { 00:38:47.272 "nsid": 1, 00:38:47.272 "bdev_name": "malloc0", 00:38:47.272 "nguid": "9C16251847434814965AF559A1FE1A7A", 00:38:47.272 "uuid": "9c162518-4743-4814-965a-f559a1fe1a7a", 00:38:47.272 "no_auto_visible": false 00:38:47.272 } 00:38:47.272 } 00:38:47.272 }, 00:38:47.272 { 00:38:47.272 "method": "nvmf_subsystem_add_listener", 00:38:47.272 "params": { 00:38:47.272 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:47.272 "listen_address": { 00:38:47.272 "trtype": "TCP", 00:38:47.272 "adrfam": "IPv4", 00:38:47.272 "traddr": "10.0.0.3", 00:38:47.272 "trsvcid": "4420" 00:38:47.272 }, 00:38:47.272 "secure_channel": false, 00:38:47.272 "sock_impl": "ssl" 00:38:47.272 } 00:38:47.272 } 00:38:47.272 ] 00:38:47.272 } 00:38:47.272 ] 00:38:47.272 }' 00:38:47.272 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:38:47.272 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:47.272 13:59:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:47.272 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:38:47.272 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:38:47.272 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72421 00:38:47.272 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72421 00:38:47.272 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72421 ']' 00:38:47.272 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:47.272 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:47.272 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:47.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:47.272 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:47.272 13:59:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:38:47.532 [2024-11-20 13:59:44.633827] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:38:47.532 [2024-11-20 13:59:44.633894] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:47.532 [2024-11-20 13:59:44.785021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:47.532 [2024-11-20 13:59:44.850443] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:47.532 [2024-11-20 13:59:44.850490] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:47.532 [2024-11-20 13:59:44.850498] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:47.532 [2024-11-20 13:59:44.850503] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:47.532 [2024-11-20 13:59:44.850508] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
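The large JSON block echoed at target/tls.sh@273 is the configuration captured earlier with save_config, now replayed into a fresh nvmf_tgt: it recreates the key0 keyring entry, the cnode1 subsystem backed by malloc0, the host1-to-key0 PSK mapping, and the TLS-capable listener on 10.0.0.3:4420 in one shot instead of reissuing the individual RPCs. The harness feeds the JSON in over /dev/fd/62 and runs the target inside the nvmf_tgt_ns_spdk network namespace; stripped of those details, the save-and-replay pattern looks roughly like this (snapshot file name illustrative):

  # dump the running target's configuration, then boot a fresh target from that snapshot
  scripts/rpc.py save_config > /tmp/tgt.json
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /tmp/tgt.json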
00:38:47.532 [2024-11-20 13:59:44.850909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:47.792 [2024-11-20 13:59:45.031196] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:48.053 [2024-11-20 13:59:45.120166] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:48.053 [2024-11-20 13:59:45.152013] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:48.053 [2024-11-20 13:59:45.152225] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:38:48.314 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:48.314 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:38:48.314 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:48.314 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:48.314 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:38:48.314 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:48.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:48.314 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=72453 00:38:48.314 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 72453 /var/tmp/bdevperf.sock 00:38:48.314 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72453 ']' 00:38:48.314 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:48.314 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:48.314 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
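The initiator side is rebuilt the same way: the bdevperf configuration captured earlier (keyring entry key0, the attach to cnode1 with psk key0, bdev_enable_histogram) is echoed straight into the file descriptor handed to the bdevperf instance launched next, while waitforlisten blocks until its RPC socket appears. The launch that follows, with its knobs spelled out (a sketch of the traced command, not meant to run standalone):

  build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4k -w verify -t 1 -c /dev/fd/63
  #   -m 2: coremask 0x2, i.e. run on core 1    -z: start idle, wait for a perform_tests RPC
  #   -r: RPC socket the test drives            -q 128 -o 4k: queue depth 128, 4 KiB I/Os
  #   -w verify -t 1: verify workload, 1 s run  -c /dev/fd/63: JSON configuration from the echo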
00:38:48.314 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:38:48.314 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:48.314 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:38:48.314 13:59:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:38:48.314 "subsystems": [ 00:38:48.314 { 00:38:48.314 "subsystem": "keyring", 00:38:48.314 "config": [ 00:38:48.314 { 00:38:48.314 "method": "keyring_file_add_key", 00:38:48.314 "params": { 00:38:48.314 "name": "key0", 00:38:48.314 "path": "/tmp/tmp.XijMJyf6Ks" 00:38:48.314 } 00:38:48.314 } 00:38:48.314 ] 00:38:48.314 }, 00:38:48.314 { 00:38:48.314 "subsystem": "iobuf", 00:38:48.314 "config": [ 00:38:48.314 { 00:38:48.314 "method": "iobuf_set_options", 00:38:48.314 "params": { 00:38:48.314 "small_pool_count": 8192, 00:38:48.314 "large_pool_count": 1024, 00:38:48.314 "small_bufsize": 8192, 00:38:48.314 "large_bufsize": 135168, 00:38:48.314 "enable_numa": false 00:38:48.314 } 00:38:48.314 } 00:38:48.314 ] 00:38:48.314 }, 00:38:48.314 { 00:38:48.314 "subsystem": "sock", 00:38:48.314 "config": [ 00:38:48.314 { 00:38:48.314 "method": "sock_set_default_impl", 00:38:48.314 "params": { 00:38:48.314 "impl_name": "uring" 00:38:48.314 } 00:38:48.314 }, 00:38:48.314 { 00:38:48.314 "method": "sock_impl_set_options", 00:38:48.314 "params": { 00:38:48.314 "impl_name": "ssl", 00:38:48.314 "recv_buf_size": 4096, 00:38:48.314 "send_buf_size": 4096, 00:38:48.314 "enable_recv_pipe": true, 00:38:48.314 "enable_quickack": false, 00:38:48.314 "enable_placement_id": 0, 00:38:48.314 "enable_zerocopy_send_server": true, 00:38:48.314 "enable_zerocopy_send_client": false, 00:38:48.314 "zerocopy_threshold": 0, 00:38:48.314 "tls_version": 0, 00:38:48.314 "enable_ktls": false 00:38:48.314 } 00:38:48.314 }, 00:38:48.314 { 00:38:48.314 "method": "sock_impl_set_options", 00:38:48.314 "params": { 00:38:48.314 "impl_name": "posix", 00:38:48.314 "recv_buf_size": 2097152, 00:38:48.314 "send_buf_size": 2097152, 00:38:48.314 "enable_recv_pipe": true, 00:38:48.314 "enable_quickack": false, 00:38:48.314 "enable_placement_id": 0, 00:38:48.314 "enable_zerocopy_send_server": true, 00:38:48.314 "enable_zerocopy_send_client": false, 00:38:48.314 "zerocopy_threshold": 0, 00:38:48.314 "tls_version": 0, 00:38:48.314 "enable_ktls": false 00:38:48.314 } 00:38:48.314 }, 00:38:48.314 { 00:38:48.314 "method": "sock_impl_set_options", 00:38:48.314 "params": { 00:38:48.314 "impl_name": "uring", 00:38:48.314 "recv_buf_size": 2097152, 00:38:48.314 "send_buf_size": 2097152, 00:38:48.314 "enable_recv_pipe": true, 00:38:48.314 "enable_quickack": false, 00:38:48.314 "enable_placement_id": 0, 00:38:48.314 "enable_zerocopy_send_server": false, 00:38:48.314 "enable_zerocopy_send_client": false, 00:38:48.314 "zerocopy_threshold": 0, 00:38:48.314 "tls_version": 0, 00:38:48.314 "enable_ktls": false 00:38:48.314 } 00:38:48.314 } 00:38:48.314 ] 00:38:48.314 }, 00:38:48.314 { 00:38:48.314 "subsystem": "vmd", 00:38:48.314 "config": [] 00:38:48.314 }, 00:38:48.314 { 00:38:48.314 "subsystem": "accel", 00:38:48.314 "config": [ 00:38:48.314 { 00:38:48.314 "method": "accel_set_options", 00:38:48.314 "params": { 00:38:48.314 "small_cache_size": 128, 00:38:48.314 "large_cache_size": 16, 00:38:48.314 "task_count": 2048, 00:38:48.314 "sequence_count": 2048, 
00:38:48.314 "buf_count": 2048 00:38:48.314 } 00:38:48.314 } 00:38:48.314 ] 00:38:48.314 }, 00:38:48.314 { 00:38:48.314 "subsystem": "bdev", 00:38:48.314 "config": [ 00:38:48.314 { 00:38:48.314 "method": "bdev_set_options", 00:38:48.314 "params": { 00:38:48.314 "bdev_io_pool_size": 65535, 00:38:48.314 "bdev_io_cache_size": 256, 00:38:48.314 "bdev_auto_examine": true, 00:38:48.314 "iobuf_small_cache_size": 128, 00:38:48.314 "iobuf_large_cache_size": 16 00:38:48.314 } 00:38:48.314 }, 00:38:48.314 { 00:38:48.314 "method": "bdev_raid_set_options", 00:38:48.314 "params": { 00:38:48.314 "process_window_size_kb": 1024, 00:38:48.314 "process_max_bandwidth_mb_sec": 0 00:38:48.314 } 00:38:48.314 }, 00:38:48.314 { 00:38:48.314 "method": "bdev_iscsi_set_options", 00:38:48.314 "params": { 00:38:48.314 "timeout_sec": 30 00:38:48.314 } 00:38:48.314 }, 00:38:48.314 { 00:38:48.314 "method": "bdev_nvme_set_options", 00:38:48.314 "params": { 00:38:48.314 "action_on_timeout": "none", 00:38:48.314 "timeout_us": 0, 00:38:48.314 "timeout_admin_us": 0, 00:38:48.314 "keep_alive_timeout_ms": 10000, 00:38:48.314 "arbitration_burst": 0, 00:38:48.314 "low_priority_weight": 0, 00:38:48.314 "medium_priority_weight": 0, 00:38:48.314 "high_priority_weight": 0, 00:38:48.314 "nvme_adminq_poll_period_us": 10000, 00:38:48.314 "nvme_ioq_poll_period_us": 0, 00:38:48.314 "io_queue_requests": 512, 00:38:48.314 "delay_cmd_submit": true, 00:38:48.314 "transport_retry_count": 4, 00:38:48.314 "bdev_retry_count": 3, 00:38:48.314 "transport_ack_timeout": 0, 00:38:48.314 "ctrlr_loss_timeout_sec": 0, 00:38:48.314 "reconnect_delay_sec": 0, 00:38:48.314 "fast_io_fail_timeout_sec": 0, 00:38:48.314 "disable_auto_failback": false, 00:38:48.314 "generate_uuids": false, 00:38:48.314 "transport_tos": 0, 00:38:48.314 "nvme_error_stat": false, 00:38:48.314 "rdma_srq_size": 0, 00:38:48.314 "io_path_stat": false, 00:38:48.314 "allow_accel_sequence": false, 00:38:48.314 "rdma_max_cq_size": 0, 00:38:48.314 "rdma_cm_event_timeout_ms": 0, 00:38:48.314 "dhchap_digests": [ 00:38:48.314 "sha256", 00:38:48.314 "sha384", 00:38:48.314 "sha512" 00:38:48.314 ], 00:38:48.314 "dhchap_dhgroups": [ 00:38:48.314 "null", 00:38:48.314 "ffdhe2048", 00:38:48.314 "ffdhe3072", 00:38:48.314 "ffdhe4096", 00:38:48.314 "ffdhe6144", 00:38:48.314 "ffdhe8192" 00:38:48.314 ] 00:38:48.314 } 00:38:48.314 }, 00:38:48.314 { 00:38:48.314 "method": "bdev_nvme_attach_controller", 00:38:48.314 "params": { 00:38:48.314 "name": "nvme0", 00:38:48.314 "trtype": "TCP", 00:38:48.314 "adrfam": "IPv4", 00:38:48.314 "traddr": "10.0.0.3", 00:38:48.314 "trsvcid": "4420", 00:38:48.314 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:48.314 "prchk_reftag": false, 00:38:48.315 "prchk_guard": false, 00:38:48.315 "ctrlr_loss_timeout_sec": 0, 00:38:48.315 "reconnect_delay_sec": 0, 00:38:48.315 "fast_io_fail_timeout_sec": 0, 00:38:48.315 "psk": "key0", 00:38:48.315 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:48.315 "hdgst": false, 00:38:48.315 "ddgst": false, 00:38:48.315 "multipath": "multipath" 00:38:48.315 } 00:38:48.315 }, 00:38:48.315 { 00:38:48.315 "method": "bdev_nvme_set_hotplug", 00:38:48.315 "params": { 00:38:48.315 "period_us": 100000, 00:38:48.315 "enable": false 00:38:48.315 } 00:38:48.315 }, 00:38:48.315 { 00:38:48.315 "method": "bdev_enable_histogram", 00:38:48.315 "params": { 00:38:48.315 "name": "nvme0n1", 00:38:48.315 "enable": true 00:38:48.315 } 00:38:48.315 }, 00:38:48.315 { 00:38:48.315 "method": "bdev_wait_for_examine" 00:38:48.315 } 00:38:48.315 ] 00:38:48.315 }, 00:38:48.315 { 
00:38:48.315 "subsystem": "nbd", 00:38:48.315 "config": [] 00:38:48.315 } 00:38:48.315 ] 00:38:48.315 }' 00:38:48.576 [2024-11-20 13:59:45.650249] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:38:48.576 [2024-11-20 13:59:45.650320] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72453 ] 00:38:48.576 [2024-11-20 13:59:45.798796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:48.576 [2024-11-20 13:59:45.873097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:48.836 [2024-11-20 13:59:46.027833] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:48.836 [2024-11-20 13:59:46.085228] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:49.406 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:49.406 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:38:49.406 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:38:49.406 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:38:49.666 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:49.666 13:59:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:49.666 Running I/O for 1 seconds... 
00:38:50.606 5868.00 IOPS, 22.92 MiB/s 00:38:50.606 Latency(us) 00:38:50.606 [2024-11-20T13:59:47.929Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:50.606 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:38:50.606 Verification LBA range: start 0x0 length 0x2000 00:38:50.606 nvme0n1 : 1.01 5924.50 23.14 0.00 0.00 21452.77 4407.22 19231.52 00:38:50.606 [2024-11-20T13:59:47.929Z] =================================================================================================================== 00:38:50.606 [2024-11-20T13:59:47.929Z] Total : 5924.50 23.14 0.00 0.00 21452.77 4407.22 19231.52 00:38:50.606 { 00:38:50.606 "results": [ 00:38:50.606 { 00:38:50.606 "job": "nvme0n1", 00:38:50.606 "core_mask": "0x2", 00:38:50.606 "workload": "verify", 00:38:50.606 "status": "finished", 00:38:50.606 "verify_range": { 00:38:50.606 "start": 0, 00:38:50.606 "length": 8192 00:38:50.606 }, 00:38:50.606 "queue_depth": 128, 00:38:50.606 "io_size": 4096, 00:38:50.606 "runtime": 1.012068, 00:38:50.606 "iops": 5924.503096629871, 00:38:50.606 "mibps": 23.142590221210433, 00:38:50.606 "io_failed": 0, 00:38:50.606 "io_timeout": 0, 00:38:50.606 "avg_latency_us": 21452.768230348618, 00:38:50.606 "min_latency_us": 4407.2244541484715, 00:38:50.606 "max_latency_us": 19231.524890829693 00:38:50.606 } 00:38:50.606 ], 00:38:50.606 "core_count": 1 00:38:50.606 } 00:38:50.606 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:38:50.606 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:38:50.606 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:38:50.606 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:38:50.606 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:38:50.606 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:38:50.606 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:38:50.606 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:38:50.606 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:38:50.606 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:38:50.606 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:38:50.606 nvmf_trace.0 00:38:50.866 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:38:50.866 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 72453 00:38:50.866 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72453 ']' 00:38:50.866 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72453 00:38:50.866 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:38:50.866 13:59:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:50.866 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72453 00:38:50.866 13:59:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:50.866 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:50.866 killing process with pid 72453 00:38:50.866 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72453' 00:38:50.866 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72453 00:38:50.866 Received shutdown signal, test time was about 1.000000 seconds 00:38:50.866 00:38:50.866 Latency(us) 00:38:50.866 [2024-11-20T13:59:48.189Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:50.866 [2024-11-20T13:59:48.189Z] =================================================================================================================== 00:38:50.866 [2024-11-20T13:59:48.189Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:50.866 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72453 00:38:51.127 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:38:51.127 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:51.127 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:38:51.127 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:51.127 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:38:51.127 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:51.127 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:51.127 rmmod nvme_tcp 00:38:51.127 rmmod nvme_fabrics 00:38:51.127 rmmod nvme_keyring 00:38:51.127 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:51.127 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:38:51.127 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:38:51.127 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 72421 ']' 00:38:51.127 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 72421 00:38:51.127 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72421 ']' 00:38:51.127 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72421 00:38:51.127 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:38:51.127 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:51.127 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72421 00:38:51.386 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:51.386 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:51.386 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72421' 00:38:51.386 killing process with pid 72421 00:38:51.386 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72421 00:38:51.386 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # 
wait 72421 00:38:51.645 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:51.645 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:51.645 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:51.645 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:38:51.645 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:38:51.645 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:38:51.645 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:51.645 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:51.645 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:38:51.645 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:38:51.645 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:38:51.645 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:38:51.645 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:38:51.645 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:38:51.645 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:38:51.645 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:38:51.645 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:38:51.645 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:38:51.645 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:38:51.645 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:38:51.645 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:51.645 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:51.904 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:38:51.904 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:51.904 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:51.904 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:51.904 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:38:51.904 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.tMddQRRkIc /tmp/tmp.VbRkT9VRNJ /tmp/tmp.XijMJyf6Ks 00:38:51.904 00:38:51.904 real 1m28.938s 00:38:51.904 user 2m14.822s 00:38:51.904 sys 0m29.879s 00:38:51.904 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:51.904 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 
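Both passes of the verify workload complete with io_failed 0, and the reported throughputs are self-consistent: 5889.06 IOPS x 4096-byte I/Os / 2^20 gives the 23.00 MiB/s of the first run, and 5924.50 x 4096 / 2^20 gives the 23.14 MiB/s of the second. Cleanup then archives the shared-memory trace file (nvmf_trace.0), stops the bdevperf and target processes, unloads the nvme-tcp, nvme-fabrics and nvme-keyring modules, tears down the veth and namespace topology, and removes the three temporary PSK files; the TLS suite accounts for 1m28.938s of wall-clock time.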
00:38:51.904 ************************************ 00:38:51.904 END TEST nvmf_tls 00:38:51.904 ************************************ 00:38:51.904 13:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:38:51.904 13:59:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:51.904 13:59:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:51.904 13:59:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:38:51.904 ************************************ 00:38:51.904 START TEST nvmf_fips 00:38:51.904 ************************************ 00:38:51.904 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:38:51.904 * Looking for test storage... 00:38:51.904 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:38:51.904 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:51.904 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:38:51.904 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:52.165 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:52.165 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:52.165 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:52.165 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:52.165 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:38:52.165 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:38:52.165 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:38:52.165 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:38:52.165 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:38:52.165 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:38:52.165 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:38:52.165 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:52.165 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:38:52.165 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:38:52.165 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:52.165 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:52.165 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:38:52.165 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:38:52.165 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:52.165 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:52.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:52.166 --rc genhtml_branch_coverage=1 00:38:52.166 --rc genhtml_function_coverage=1 00:38:52.166 --rc genhtml_legend=1 00:38:52.166 --rc geninfo_all_blocks=1 00:38:52.166 --rc geninfo_unexecuted_blocks=1 00:38:52.166 00:38:52.166 ' 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:52.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:52.166 --rc genhtml_branch_coverage=1 00:38:52.166 --rc genhtml_function_coverage=1 00:38:52.166 --rc genhtml_legend=1 00:38:52.166 --rc geninfo_all_blocks=1 00:38:52.166 --rc geninfo_unexecuted_blocks=1 00:38:52.166 00:38:52.166 ' 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:52.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:52.166 --rc genhtml_branch_coverage=1 00:38:52.166 --rc genhtml_function_coverage=1 00:38:52.166 --rc genhtml_legend=1 00:38:52.166 --rc geninfo_all_blocks=1 00:38:52.166 --rc geninfo_unexecuted_blocks=1 00:38:52.166 00:38:52.166 ' 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:52.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:52.166 --rc genhtml_branch_coverage=1 00:38:52.166 --rc genhtml_function_coverage=1 00:38:52.166 --rc genhtml_legend=1 00:38:52.166 --rc geninfo_all_blocks=1 00:38:52.166 --rc geninfo_unexecuted_blocks=1 00:38:52.166 00:38:52.166 ' 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
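The trace above is the coverage preamble that autotest_common.sh runs before the FIPS test proper: it reads the installed lcov version, compares it against 2 with the dotted-version helpers from scripts/common.sh, and, since 1.15 < 2, exports the old-style branch and function coverage flags. Roughly, in shell:

  source scripts/common.sh
  if lt "$(lcov --version | awk '{print $NF}')" 2; then
      LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi

After that, fips.sh sources test/nvmf/common.sh (already begun in the last records above), which sets the NVMF_* defaults and generates the host NQN used later in the test.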
00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=105ec898-1662-46bd-85be-b241e399edb9 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:52.166 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:52.166 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:38:52.167 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:38:52.428 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:38:52.428 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:38:52.428 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:38:52.428 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:38:52.428 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:38:52.428 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:38:52.428 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:38:52.428 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:38:52.428 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:52.428 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:38:52.428 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:52.428 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:38:52.428 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:52.428 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:38:52.428 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:38:52.428 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:38:52.428 Error setting digest 00:38:52.428 4042CED2667F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:38:52.428 4042CED2667F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:38:52.428 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:38:52.428 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:52.428 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:52.428 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:52.428 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:38:52.428 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:52.428 
13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:52.428 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:52.428 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:52.428 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:52.428 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:52.428 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:52.428 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:52.428 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:38:52.428 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:38:52.428 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:38:52.428 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:38:52.428 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:38:52.428 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:38:52.428 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:52.428 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:38:52.428 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:38:52.428 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:38:52.428 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:52.428 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:38:52.428 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:38:52.429 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:38:52.429 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:38:52.429 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:38:52.429 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:38:52.429 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:52.429 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:38:52.429 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:38:52.429 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:38:52.429 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:38:52.429 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:38:52.429 Cannot find device "nvmf_init_br" 00:38:52.429 13:59:49 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:38:52.429 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:38:52.429 Cannot find device "nvmf_init_br2" 00:38:52.429 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:38:52.429 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:38:52.429 Cannot find device "nvmf_tgt_br" 00:38:52.429 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:38:52.429 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:38:52.429 Cannot find device "nvmf_tgt_br2" 00:38:52.429 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:38:52.429 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:38:52.429 Cannot find device "nvmf_init_br" 00:38:52.429 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:38:52.429 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:38:52.429 Cannot find device "nvmf_init_br2" 00:38:52.429 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:38:52.429 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:38:52.429 Cannot find device "nvmf_tgt_br" 00:38:52.429 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:38:52.429 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:38:52.429 Cannot find device "nvmf_tgt_br2" 00:38:52.429 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:38:52.429 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:38:52.429 Cannot find device "nvmf_br" 00:38:52.429 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:38:52.429 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:38:52.429 Cannot find device "nvmf_init_if" 00:38:52.429 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:38:52.429 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:38:52.429 Cannot find device "nvmf_init_if2" 00:38:52.429 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:38:52.429 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:52.689 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:52.689 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:38:52.689 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:52.689 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:52.689 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:38:52.689 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:38:52.689 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:38:52.689 13:59:49 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:38:52.689 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:38:52.689 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:38:52.689 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:38:52.689 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:38:52.689 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:38:52.689 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:38:52.689 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:38:52.689 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:38:52.689 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:38:52.689 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:38:52.689 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:38:52.689 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:38:52.689 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:38:52.689 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:38:52.689 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:38:52.689 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:38:52.689 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:38:52.689 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:38:52.689 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:38:52.689 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:38:52.689 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:38:52.689 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:38:52.689 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:38:52.690 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:38:52.690 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:38:52.690 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:38:52.690 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:38:52.690 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:38:52.690 13:59:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:38:52.690 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:38:52.950 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:38:52.950 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.211 ms 00:38:52.950 00:38:52.950 --- 10.0.0.3 ping statistics --- 00:38:52.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:52.950 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:38:52.950 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:38:52.950 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:38:52.950 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.074 ms 00:38:52.950 00:38:52.950 --- 10.0.0.4 ping statistics --- 00:38:52.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:52.950 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:38:52.950 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:38:52.950 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:52.950 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:38:52.950 00:38:52.950 --- 10.0.0.1 ping statistics --- 00:38:52.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:52.950 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:38:52.950 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:38:52.950 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:52.950 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:38:52.950 00:38:52.950 --- 10.0.0.2 ping statistics --- 00:38:52.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:52.950 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:38:52.950 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:52.950 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:38:52.950 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:52.950 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:52.950 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:52.950 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:52.950 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:52.950 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:52.950 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:52.950 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:38:52.950 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:52.950 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:52.950 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:38:52.950 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=72779 00:38:52.950 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:38:52.950 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 72779 00:38:52.950 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 72779 ']' 00:38:52.950 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:52.950 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:52.950 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:52.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:52.950 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:52.950 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:38:52.950 [2024-11-20 13:59:50.155067] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
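For readers skimming the wall of ip/iptables commands above: nvmf_veth_init builds a purely virtual topology in which the target's interfaces live in the nvmf_tgt_ns_spdk namespace (10.0.0.3/10.0.0.4) and the initiator-side veth ends stay on the host (10.0.0.1/10.0.0.2), with everything joined by the nvmf_br bridge and NVMe/TCP port 4420 opened in iptables; the four pings then confirm reachability in both directions before the target is started inside the namespace. A condensed sketch of the same topology (one veth pair per side shown; the second pair in the log is analogous) is:

  # condensed, illustrative sketch of the veth/bridge test topology (run as root)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # target end moves into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge                                # bridge joins both veth peers
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic
  ping -c 1 10.0.0.3   # sanity check: host reaches the target interface through the bridge

The earlier "Cannot find device" messages are expected: the helper first tries to tear down any leftover interfaces from a previous run before creating fresh ones.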
00:38:52.950 [2024-11-20 13:59:50.155230] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:53.211 [2024-11-20 13:59:50.304023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:53.211 [2024-11-20 13:59:50.354504] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:53.211 [2024-11-20 13:59:50.354548] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:53.211 [2024-11-20 13:59:50.354554] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:53.211 [2024-11-20 13:59:50.354559] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:53.211 [2024-11-20 13:59:50.354562] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:53.211 [2024-11-20 13:59:50.354902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:53.211 [2024-11-20 13:59:50.399261] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:53.781 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:53.781 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:38:53.781 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:53.781 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:53.781 13:59:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:38:53.781 13:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:53.781 13:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:38:53.781 13:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:38:53.781 13:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:38:53.781 13:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.rRY 00:38:53.781 13:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:38:53.781 13:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.rRY 00:38:53.781 13:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.rRY 00:38:53.781 13:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.rRY 00:38:53.781 13:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:54.041 [2024-11-20 13:59:51.242661] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:54.041 [2024-11-20 13:59:51.258605] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:54.041 [2024-11-20 13:59:51.258786] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:38:54.041 malloc0 00:38:54.041 13:59:51 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:54.041 13:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=72815 00:38:54.041 13:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:38:54.041 13:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 72815 /var/tmp/bdevperf.sock 00:38:54.041 13:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 72815 ']' 00:38:54.041 13:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:54.041 13:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:54.041 13:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:54.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:54.041 13:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:54.041 13:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:38:54.302 [2024-11-20 13:59:51.401195] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:38:54.302 [2024-11-20 13:59:51.401324] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72815 ] 00:38:54.302 [2024-11-20 13:59:51.550018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:54.302 [2024-11-20 13:59:51.604397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:54.563 [2024-11-20 13:59:51.646491] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:38:55.136 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:55.136 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:38:55.136 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.rRY 00:38:55.137 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:38:55.396 [2024-11-20 13:59:52.570368] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:55.396 TLSTESTn1 00:38:55.396 13:59:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:55.657 Running I/O for 10 seconds... 
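The RPC sequence above is what actually exercises TLS under the FIPS provider: the interchange-format PSK written to /tmp/spdk-psk.rRY is registered on the target by setup_nvmf_tgt_conf and then loaded into bdevperf's keyring on the initiator side, so the controller attach negotiates TLS with that key. Stripped of the xtrace noise, the initiator-side sequence from this run looks roughly like the following (paths and the key value are copied from the log; treat it as a sketch, not a reusable script):

  # sketch of the PSK + TLS attach sequence driven through bdevperf's RPC socket
  KEY_PATH=/tmp/spdk-psk.rRY                      # 0600-protected interchange-format PSK
  echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$KEY_PATH"
  chmod 0600 "$KEY_PATH"

  # start bdevperf in wait-for-RPC mode, then register the key and attach over TLS
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      keyring_file_add_key key0 "$KEY_PATH"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

  # kick off the 10-second verify workload (queue depth 128, 4 KiB I/O) against TLSTESTn1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The per-second IOPS samples and the latency summary that follow are bdevperf's output for that 10-second run over the TLS-encrypted connection.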
00:38:57.557 5895.00 IOPS, 23.03 MiB/s [2024-11-20T13:59:55.819Z] 5554.00 IOPS, 21.70 MiB/s [2024-11-20T13:59:56.756Z] 5444.00 IOPS, 21.27 MiB/s [2024-11-20T13:59:58.136Z] 5458.50 IOPS, 21.32 MiB/s [2024-11-20T13:59:59.072Z] 5512.40 IOPS, 21.53 MiB/s [2024-11-20T14:00:00.010Z] 5550.50 IOPS, 21.68 MiB/s [2024-11-20T14:00:00.945Z] 5558.86 IOPS, 21.71 MiB/s [2024-11-20T14:00:01.883Z] 5551.00 IOPS, 21.68 MiB/s [2024-11-20T14:00:02.821Z] 5549.00 IOPS, 21.68 MiB/s [2024-11-20T14:00:02.821Z] 5629.00 IOPS, 21.99 MiB/s 00:39:05.498 Latency(us) 00:39:05.498 [2024-11-20T14:00:02.821Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:05.498 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:39:05.498 Verification LBA range: start 0x0 length 0x2000 00:39:05.498 TLSTESTn1 : 10.01 5635.88 22.02 0.00 0.00 22677.31 3749.00 36860.42 00:39:05.498 [2024-11-20T14:00:02.821Z] =================================================================================================================== 00:39:05.498 [2024-11-20T14:00:02.821Z] Total : 5635.88 22.02 0.00 0.00 22677.31 3749.00 36860.42 00:39:05.498 { 00:39:05.498 "results": [ 00:39:05.498 { 00:39:05.498 "job": "TLSTESTn1", 00:39:05.498 "core_mask": "0x4", 00:39:05.498 "workload": "verify", 00:39:05.498 "status": "finished", 00:39:05.498 "verify_range": { 00:39:05.498 "start": 0, 00:39:05.498 "length": 8192 00:39:05.498 }, 00:39:05.498 "queue_depth": 128, 00:39:05.498 "io_size": 4096, 00:39:05.498 "runtime": 10.010325, 00:39:05.498 "iops": 5635.880952916114, 00:39:05.498 "mibps": 22.015159972328572, 00:39:05.498 "io_failed": 0, 00:39:05.498 "io_timeout": 0, 00:39:05.498 "avg_latency_us": 22677.312953255983, 00:39:05.498 "min_latency_us": 3749.0026200873363, 00:39:05.498 "max_latency_us": 36860.42270742358 00:39:05.498 } 00:39:05.498 ], 00:39:05.498 "core_count": 1 00:39:05.498 } 00:39:05.498 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:39:05.498 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:39:05.498 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:39:05.498 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:39:05.498 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:39:05.498 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:39:05.498 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:39:05.498 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:39:05.499 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:39:05.499 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:39:05.499 nvmf_trace.0 00:39:05.759 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:39:05.759 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 72815 00:39:05.759 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 72815 ']' 00:39:05.759 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
72815 00:39:05.759 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:39:05.759 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:05.759 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72815 00:39:05.759 killing process with pid 72815 00:39:05.759 Received shutdown signal, test time was about 10.000000 seconds 00:39:05.759 00:39:05.759 Latency(us) 00:39:05.759 [2024-11-20T14:00:03.082Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:05.759 [2024-11-20T14:00:03.082Z] =================================================================================================================== 00:39:05.759 [2024-11-20T14:00:03.082Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:05.759 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:39:05.759 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:39:05.759 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72815' 00:39:05.759 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 72815 00:39:05.759 14:00:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 72815 00:39:06.019 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:39:06.019 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:06.019 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:39:06.019 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:06.019 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:39:06.019 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:06.019 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:06.019 rmmod nvme_tcp 00:39:06.019 rmmod nvme_fabrics 00:39:06.019 rmmod nvme_keyring 00:39:06.019 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:06.019 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:39:06.019 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:39:06.019 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 72779 ']' 00:39:06.019 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 72779 00:39:06.019 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 72779 ']' 00:39:06.019 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 72779 00:39:06.019 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:39:06.019 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:06.019 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72779 00:39:06.019 killing process with pid 72779 00:39:06.019 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:06.019 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:06.019 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72779' 00:39:06.019 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 72779 00:39:06.019 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 72779 00:39:06.279 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:06.279 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:06.279 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:06.279 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:39:06.279 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:39:06.279 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:06.279 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:39:06.279 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:06.279 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:39:06.279 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:39:06.279 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:39:06.279 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:39:06.279 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:39:06.539 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:39:06.539 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:39:06.539 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:39:06.539 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:39:06.539 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:39:06.539 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:39:06.539 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:39:06.539 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:39:06.539 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:39:06.539 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:39:06.539 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:06.539 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:06.539 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:06.539 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:39:06.539 14:00:03 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.rRY 00:39:06.539 00:39:06.539 real 0m14.732s 00:39:06.539 user 0m19.805s 00:39:06.539 sys 0m5.749s 00:39:06.539 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:06.539 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:39:06.539 ************************************ 00:39:06.539 END TEST nvmf_fips 00:39:06.539 ************************************ 00:39:06.799 14:00:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:39:06.799 14:00:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:06.799 14:00:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:06.799 14:00:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:39:06.799 ************************************ 00:39:06.799 START TEST nvmf_control_msg_list 00:39:06.799 ************************************ 00:39:06.799 14:00:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:39:06.799 * Looking for test storage... 00:39:06.799 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:39:06.799 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:06.799 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:39:06.799 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:06.799 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:06.799 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:06.799 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:06.799 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:06.799 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:39:06.799 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:39:06.799 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:39:06.799 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:39:06.799 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:39:06.799 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:39:06.799 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:39:06.799 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:06.799 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:39:06.799 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:39:06.799 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:39:06.799 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:06.799 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:39:06.799 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:39:06.799 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:06.799 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:39:06.799 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:39:06.799 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:39:06.799 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:39:06.799 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:06.799 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:39:06.799 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:39:06.799 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:06.799 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:06.799 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:39:06.799 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:06.799 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:06.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:06.799 --rc genhtml_branch_coverage=1 00:39:06.799 --rc genhtml_function_coverage=1 00:39:06.799 --rc genhtml_legend=1 00:39:06.799 --rc geninfo_all_blocks=1 00:39:06.799 --rc geninfo_unexecuted_blocks=1 00:39:06.799 00:39:06.799 ' 00:39:06.799 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:06.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:06.799 --rc genhtml_branch_coverage=1 00:39:06.799 --rc genhtml_function_coverage=1 00:39:06.799 --rc genhtml_legend=1 00:39:06.799 --rc geninfo_all_blocks=1 00:39:06.799 --rc geninfo_unexecuted_blocks=1 00:39:06.799 00:39:06.799 ' 00:39:06.799 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:06.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:06.800 --rc genhtml_branch_coverage=1 00:39:06.800 --rc genhtml_function_coverage=1 00:39:06.800 --rc genhtml_legend=1 00:39:06.800 --rc geninfo_all_blocks=1 00:39:06.800 --rc geninfo_unexecuted_blocks=1 00:39:06.800 00:39:06.800 ' 00:39:06.800 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:06.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:06.800 --rc genhtml_branch_coverage=1 00:39:06.800 --rc genhtml_function_coverage=1 00:39:06.800 --rc genhtml_legend=1 00:39:06.800 --rc geninfo_all_blocks=1 00:39:06.800 --rc 
geninfo_unexecuted_blocks=1 00:39:06.800 00:39:06.800 ' 00:39:06.800 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=105ec898-1662-46bd-85be-b241e399edb9 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:07.061 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:39:07.061 Cannot find device "nvmf_init_br" 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:39:07.061 Cannot find device "nvmf_init_br2" 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:39:07.061 Cannot find device "nvmf_tgt_br" 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:39:07.061 Cannot find device "nvmf_tgt_br2" 00:39:07.061 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:39:07.062 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:39:07.062 Cannot find device "nvmf_init_br" 00:39:07.062 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:39:07.062 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:39:07.062 Cannot find device "nvmf_init_br2" 00:39:07.062 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:39:07.062 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:39:07.062 Cannot find device "nvmf_tgt_br" 00:39:07.062 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:39:07.062 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:39:07.062 Cannot find device "nvmf_tgt_br2" 00:39:07.062 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:39:07.062 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:39:07.062 Cannot find device "nvmf_br" 00:39:07.062 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:39:07.062 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:39:07.062 Cannot find 
device "nvmf_init_if" 00:39:07.062 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:39:07.062 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:39:07.062 Cannot find device "nvmf_init_if2" 00:39:07.062 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:39:07.062 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:39:07.062 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:07.062 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:39:07.062 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:39:07.062 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:07.062 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:39:07.062 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:39:07.062 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:39:07.062 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:39:07.323 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:39:07.323 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:39:07.323 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:39:07.323 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:39:07.323 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:39:07.323 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:39:07.323 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:39:07.323 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:39:07.323 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:39:07.323 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:39:07.323 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:39:07.323 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:39:07.323 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:39:07.323 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:39:07.323 14:00:04 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:39:07.323 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:39:07.323 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:39:07.323 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:39:07.323 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:39:07.323 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:39:07.323 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:39:07.323 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:39:07.323 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:39:07.323 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:39:07.323 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:39:07.323 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:39:07.323 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:39:07.323 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:39:07.323 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:39:07.323 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:39:07.323 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:39:07.323 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.105 ms 00:39:07.323 00:39:07.323 --- 10.0.0.3 ping statistics --- 00:39:07.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:07.323 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:39:07.323 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:39:07.323 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:39:07.323 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.107 ms 00:39:07.323 00:39:07.323 --- 10.0.0.4 ping statistics --- 00:39:07.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:07.323 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:39:07.323 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:39:07.323 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:07.323 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:39:07.323 00:39:07.323 --- 10.0.0.1 ping statistics --- 00:39:07.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:07.323 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:39:07.324 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:39:07.324 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:07.324 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:39:07.324 00:39:07.324 --- 10.0.0.2 ping statistics --- 00:39:07.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:07.324 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:39:07.324 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:07.324 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:39:07.324 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:07.324 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:07.324 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:07.324 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:07.324 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:07.324 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:07.324 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:07.324 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:39:07.324 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:07.324 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:07.324 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:39:07.324 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=73204 00:39:07.324 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:39:07.324 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 73204 00:39:07.324 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 73204 ']' 00:39:07.324 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:07.324 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:07.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:07.324 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
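The trace above launches nvmf_tgt inside the nvmf_tgt_ns_spdk namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF) and waitforlisten then blocks until the target answers on /var/tmp/spdk.sock before any RPCs are issued. A minimal sketch of that wait step, assuming only the socket path printed above and SPDK's stock scripts/rpc.py client; the helper name, retry count and sleep interval are illustrative, not the values autotest_common.sh actually uses:

    # Hypothetical helper: poll the SPDK RPC socket until the app responds or exits.
    wait_for_rpc_socket() {
        local pid=$1 rpc_sock=${2:-/var/tmp/spdk.sock} retries=100
        while (( retries-- > 0 )); do
            # Stop early if the target process died during startup.
            kill -0 "$pid" 2>/dev/null || return 1
            # rpc_get_methods only succeeds once the app is listening on the socket.
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }

With the pid printed above this would be invoked as wait_for_rpc_socket 73204, after which the nvmf_create_transport / nvmf_create_subsystem RPCs in the following entries can proceed.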
00:39:07.324 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:07.324 14:00:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:39:07.583 [2024-11-20 14:00:04.664007] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:39:07.583 [2024-11-20 14:00:04.664452] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:07.583 [2024-11-20 14:00:04.815242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:07.583 [2024-11-20 14:00:04.871730] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:07.583 [2024-11-20 14:00:04.872018] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:07.583 [2024-11-20 14:00:04.872076] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:07.583 [2024-11-20 14:00:04.872136] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:07.583 [2024-11-20 14:00:04.872173] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:07.583 [2024-11-20 14:00:04.872567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:07.843 [2024-11-20 14:00:04.943585] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:08.413 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:08.413 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:39:08.413 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:08.413 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:08.413 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:39:08.413 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:08.413 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:39:08.413 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:39:08.413 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:39:08.414 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.414 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:39:08.414 [2024-11-20 14:00:05.587483] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:08.414 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:08.414 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:39:08.414 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.414 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:39:08.414 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:08.414 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:39:08.414 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.414 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:39:08.414 Malloc0 00:39:08.414 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:08.414 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:39:08.414 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.414 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:39:08.414 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:08.414 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:39:08.414 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.414 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:39:08.414 [2024-11-20 14:00:05.629484] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:39:08.414 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:08.414 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=73235 00:39:08.414 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:39:08.414 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=73236 00:39:08.414 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:39:08.414 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=73237 00:39:08.414 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:39:08.414 14:00:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 73235 00:39:08.675 [2024-11-20 14:00:05.833422] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:39:08.676 [2024-11-20 14:00:05.843561] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:39:08.676 [2024-11-20 14:00:05.843691] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:39:09.615 Initializing NVMe Controllers 00:39:09.615 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:39:09.615 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:39:09.615 Initialization complete. Launching workers. 00:39:09.615 ======================================================== 00:39:09.615 Latency(us) 00:39:09.615 Device Information : IOPS MiB/s Average min max 00:39:09.615 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 4890.00 19.10 204.27 90.16 451.84 00:39:09.615 ======================================================== 00:39:09.615 Total : 4890.00 19.10 204.27 90.16 451.84 00:39:09.615 00:39:09.615 Initializing NVMe Controllers 00:39:09.615 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:39:09.615 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:39:09.615 Initialization complete. Launching workers. 00:39:09.615 ======================================================== 00:39:09.615 Latency(us) 00:39:09.615 Device Information : IOPS MiB/s Average min max 00:39:09.615 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 4885.00 19.08 204.50 99.11 466.61 00:39:09.615 ======================================================== 00:39:09.615 Total : 4885.00 19.08 204.50 99.11 466.61 00:39:09.615 00:39:09.615 Initializing NVMe Controllers 00:39:09.615 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:39:09.615 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:39:09.615 Initialization complete. Launching workers. 
00:39:09.615 ======================================================== 00:39:09.615 Latency(us) 00:39:09.615 Device Information : IOPS MiB/s Average min max 00:39:09.615 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 4887.00 19.09 204.43 90.96 496.08 00:39:09.615 ======================================================== 00:39:09.615 Total : 4887.00 19.09 204.43 90.96 496.08 00:39:09.615 00:39:09.615 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 73236 00:39:09.615 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 73237 00:39:09.615 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:39:09.615 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:39:09.615 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:09.615 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:39:09.615 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:09.615 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:39:09.615 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:09.615 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:09.615 rmmod nvme_tcp 00:39:09.875 rmmod nvme_fabrics 00:39:09.875 rmmod nvme_keyring 00:39:09.876 14:00:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:09.876 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:39:09.876 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:39:09.876 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 73204 ']' 00:39:09.876 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 73204 00:39:09.876 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 73204 ']' 00:39:09.876 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 73204 00:39:09.876 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:39:09.876 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:09.876 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73204 00:39:09.876 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:09.876 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:09.876 killing process with pid 73204 00:39:09.876 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73204' 00:39:09.876 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 73204 00:39:09.876 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 73204 00:39:10.136 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:10.136 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:10.136 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:10.136 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:39:10.136 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:39:10.136 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:10.136 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:39:10.136 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:10.136 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:39:10.136 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:39:10.136 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:39:10.136 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:39:10.136 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:39:10.136 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:39:10.136 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:39:10.136 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:39:10.136 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:39:10.136 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:39:10.395 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:39:10.395 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:39:10.395 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:39:10.395 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:39:10.395 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:39:10.395 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:10.395 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:10.395 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:10.395 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:39:10.395 00:39:10.395 real 0m3.761s 00:39:10.395 user 0m5.476s 00:39:10.395 
sys 0m1.642s 00:39:10.395 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:10.395 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:39:10.395 ************************************ 00:39:10.395 END TEST nvmf_control_msg_list 00:39:10.395 ************************************ 00:39:10.395 14:00:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:39:10.395 14:00:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:10.395 14:00:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:10.395 14:00:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:39:10.395 ************************************ 00:39:10.395 START TEST nvmf_wait_for_buf 00:39:10.395 ************************************ 00:39:10.395 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:39:10.656 * Looking for test storage... 00:39:10.656 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:39:10.656 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:10.656 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:39:10.656 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:10.656 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:10.656 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:10.656 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:10.656 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:10.656 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:39:10.656 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:39:10.656 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:39:10.656 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:39:10.656 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:39:10.656 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:39:10.656 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:10.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:10.657 --rc genhtml_branch_coverage=1 00:39:10.657 --rc genhtml_function_coverage=1 00:39:10.657 --rc genhtml_legend=1 00:39:10.657 --rc geninfo_all_blocks=1 00:39:10.657 --rc geninfo_unexecuted_blocks=1 00:39:10.657 00:39:10.657 ' 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:10.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:10.657 --rc genhtml_branch_coverage=1 00:39:10.657 --rc genhtml_function_coverage=1 00:39:10.657 --rc genhtml_legend=1 00:39:10.657 --rc geninfo_all_blocks=1 00:39:10.657 --rc geninfo_unexecuted_blocks=1 00:39:10.657 00:39:10.657 ' 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:10.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:10.657 --rc genhtml_branch_coverage=1 00:39:10.657 --rc genhtml_function_coverage=1 00:39:10.657 --rc genhtml_legend=1 00:39:10.657 --rc geninfo_all_blocks=1 00:39:10.657 --rc geninfo_unexecuted_blocks=1 00:39:10.657 00:39:10.657 ' 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:10.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:10.657 --rc genhtml_branch_coverage=1 00:39:10.657 --rc genhtml_function_coverage=1 00:39:10.657 --rc genhtml_legend=1 00:39:10.657 --rc geninfo_all_blocks=1 00:39:10.657 --rc geninfo_unexecuted_blocks=1 00:39:10.657 00:39:10.657 ' 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:39:10.657 14:00:07 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=105ec898-1662-46bd-85be-b241e399edb9 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:10.657 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
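As in the control_msg_list run earlier, the message '/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected' is an expected diagnostic rather than a failure: the flag tested at that line expands to an empty string, so the [ builtin complains that it did not receive an integer and returns non-zero, which simply skips the optional branch while the script continues. A small standalone illustration of the behaviour (FLAG below is a made-up name; the actual variable tested at common.sh line 33 is not visible in this trace), followed by the defaulting idiom that would silence the message:

    #!/usr/bin/env bash
    # FLAG is deliberately left unset to reproduce the diagnostic seen in the log.
    if [ "$FLAG" -eq 1 ]; then      # prints "[: : integer expression expected"; the test returns non-zero
        echo "optional branch taken"
    fi
    echo "execution continues either way"

    # Defaulting the expansion avoids the message while keeping the same logic:
    if [ "${FLAG:-0}" -eq 1 ]; then
        echo "optional branch taken"
    fi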
00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:39:10.657 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:39:10.658 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:39:10.658 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:10.658 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:39:10.658 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:39:10.658 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:39:10.658 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:10.658 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:39:10.658 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:39:10.658 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:39:10.658 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:39:10.658 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:39:10.658 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:39:10.658 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:10.658 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:39:10.658 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:39:10.658 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:39:10.658 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:39:10.658 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:39:10.918 Cannot find device "nvmf_init_br" 00:39:10.918 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:39:10.918 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:39:10.918 Cannot find device "nvmf_init_br2" 00:39:10.918 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:39:10.918 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:39:10.918 Cannot find device "nvmf_tgt_br" 00:39:10.918 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:39:10.918 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:39:10.918 Cannot find device "nvmf_tgt_br2" 00:39:10.918 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:39:10.918 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:39:10.918 Cannot find device "nvmf_init_br" 00:39:10.918 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:39:10.918 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:39:10.918 Cannot find device "nvmf_init_br2" 00:39:10.918 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:39:10.918 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:39:10.918 Cannot find device "nvmf_tgt_br" 00:39:10.918 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:39:10.918 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:39:10.918 Cannot find device "nvmf_tgt_br2" 00:39:10.918 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:39:10.918 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:39:10.918 Cannot find device "nvmf_br" 00:39:10.918 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:39:10.918 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:39:10.918 Cannot find device "nvmf_init_if" 00:39:10.918 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:39:10.918 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:39:10.918 Cannot find device "nvmf_init_if2" 00:39:10.918 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:39:10.918 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:39:10.918 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:10.918 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:39:10.918 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:39:10.918 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:10.918 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:39:10.918 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:39:10.918 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:39:10.918 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:39:10.918 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:39:10.918 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:39:10.918 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:39:10.918 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:39:10.918 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:39:11.178 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:39:11.178 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:39:11.178 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:39:11.178 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:39:11.178 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:39:11.178 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:39:11.178 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:39:11.178 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:39:11.178 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:39:11.178 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:39:11.178 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:39:11.178 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:39:11.178 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:39:11.178 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:39:11.178 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:39:11.178 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:39:11.178 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:39:11.178 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:39:11.178 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:39:11.178 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:39:11.178 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:39:11.179 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:39:11.179 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:39:11.179 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:39:11.179 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:39:11.179 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:39:11.179 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:39:11.179 00:39:11.179 --- 10.0.0.3 ping statistics --- 00:39:11.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:11.179 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:39:11.179 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:39:11.179 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:39:11.179 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.112 ms 00:39:11.179 00:39:11.179 --- 10.0.0.4 ping statistics --- 00:39:11.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:11.179 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:39:11.179 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:39:11.179 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:11.179 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:39:11.179 00:39:11.179 --- 10.0.0.1 ping statistics --- 00:39:11.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:11.179 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:39:11.179 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:39:11.179 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:39:11.179 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:39:11.179 00:39:11.179 --- 10.0.0.2 ping statistics --- 00:39:11.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:11.179 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:39:11.179 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:11.179 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:39:11.179 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:11.179 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:11.179 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:11.179 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:11.179 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:11.179 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:11.179 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:11.179 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:39:11.179 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:11.179 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:11.179 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:39:11.179 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=73468 00:39:11.179 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:39:11.179 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 73468 00:39:11.179 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 73468 ']' 00:39:11.179 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:11.179 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:11.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:11.179 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:11.179 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:11.179 14:00:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:39:11.179 [2024-11-20 14:00:08.463736] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:39:11.179 [2024-11-20 14:00:08.463823] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:11.439 [2024-11-20 14:00:08.613900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:11.439 [2024-11-20 14:00:08.678566] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:11.439 [2024-11-20 14:00:08.678640] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:11.439 [2024-11-20 14:00:08.678646] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:11.439 [2024-11-20 14:00:08.678651] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:11.439 [2024-11-20 14:00:08.678655] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:11.439 [2024-11-20 14:00:08.678975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:12.015 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:12.015 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:39:12.015 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:12.015 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:12.015 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:39:12.278 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:12.278 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:39:12.278 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:39:12.278 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:39:12.278 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:12.278 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:39:12.278 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:12.278 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:39:12.278 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:12.278 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:39:12.278 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:12.278 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:39:12.278 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:12.278 14:00:09 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:39:12.278 [2024-11-20 14:00:09.450538] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:12.278 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:12.278 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:39:12.278 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:12.278 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:39:12.278 Malloc0 00:39:12.278 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:12.278 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:39:12.278 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:12.278 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:39:12.278 [2024-11-20 14:00:09.533568] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:12.278 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:12.278 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:39:12.278 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:12.278 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:39:12.278 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:12.278 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:39:12.278 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:12.278 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:39:12.278 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:12.278 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:39:12.278 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:12.278 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:39:12.278 [2024-11-20 14:00:09.569623] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:39:12.278 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:12.278 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:39:12.538 [2024-11-20 14:00:09.767827] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:39:13.920 Initializing NVMe Controllers 00:39:13.920 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:39:13.920 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:39:13.920 Initialization complete. Launching workers. 00:39:13.920 ======================================================== 00:39:13.920 Latency(us) 00:39:13.920 Device Information : IOPS MiB/s Average min max 00:39:13.920 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 508.00 63.50 7912.91 2722.47 10029.81 00:39:13.920 ======================================================== 00:39:13.920 Total : 508.00 63.50 7912.91 2722.47 10029.81 00:39:13.920 00:39:13.920 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:39:13.920 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:39:13.920 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:13.920 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:39:13.920 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:13.920 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4826 00:39:13.920 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4826 -eq 0 ]] 00:39:13.920 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:39:13.920 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:39:13.920 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:13.920 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:39:13.920 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:13.920 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:39:13.920 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:13.920 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:13.920 rmmod nvme_tcp 00:39:13.920 rmmod nvme_fabrics 00:39:13.920 rmmod nvme_keyring 00:39:13.920 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:13.920 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:39:13.920 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:39:13.920 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 73468 ']' 00:39:13.920 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 73468 00:39:13.920 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 73468 ']' 00:39:13.920 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- 
# kill -0 73468 00:39:13.920 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:39:13.920 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:14.179 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73468 00:39:14.179 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:14.179 killing process with pid 73468 00:39:14.179 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:14.179 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73468' 00:39:14.179 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 73468 00:39:14.179 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 73468 00:39:14.439 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:14.439 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:14.439 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:14.439 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:39:14.439 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:39:14.439 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:14.439 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:39:14.439 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:14.439 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:39:14.439 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:39:14.439 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:39:14.439 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:39:14.439 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:39:14.439 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:39:14.439 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:39:14.439 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:39:14.439 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:39:14.439 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:39:14.439 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:39:14.439 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:39:14.439 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:39:14.439 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:39:14.699 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:39:14.699 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:14.699 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:14.699 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:14.699 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:39:14.699 00:39:14.699 real 0m4.107s 00:39:14.699 user 0m3.438s 00:39:14.699 sys 0m0.967s 00:39:14.699 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:14.699 ************************************ 00:39:14.699 END TEST nvmf_wait_for_buf 00:39:14.699 ************************************ 00:39:14.699 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:39:14.699 14:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:39:14.699 14:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:39:14.699 14:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:39:14.699 14:00:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:14.699 14:00:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:14.699 14:00:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:39:14.699 ************************************ 00:39:14.699 START TEST nvmf_nsid 00:39:14.699 ************************************ 00:39:14.699 14:00:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:39:14.699 * Looking for test storage... 
00:39:14.699 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:39:14.699 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:14.699 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:39:14.699 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:14.966 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:14.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:14.967 --rc genhtml_branch_coverage=1 00:39:14.967 --rc genhtml_function_coverage=1 00:39:14.967 --rc genhtml_legend=1 00:39:14.967 --rc geninfo_all_blocks=1 00:39:14.967 --rc geninfo_unexecuted_blocks=1 00:39:14.967 00:39:14.967 ' 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:14.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:14.967 --rc genhtml_branch_coverage=1 00:39:14.967 --rc genhtml_function_coverage=1 00:39:14.967 --rc genhtml_legend=1 00:39:14.967 --rc geninfo_all_blocks=1 00:39:14.967 --rc geninfo_unexecuted_blocks=1 00:39:14.967 00:39:14.967 ' 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:14.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:14.967 --rc genhtml_branch_coverage=1 00:39:14.967 --rc genhtml_function_coverage=1 00:39:14.967 --rc genhtml_legend=1 00:39:14.967 --rc geninfo_all_blocks=1 00:39:14.967 --rc geninfo_unexecuted_blocks=1 00:39:14.967 00:39:14.967 ' 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:14.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:14.967 --rc genhtml_branch_coverage=1 00:39:14.967 --rc genhtml_function_coverage=1 00:39:14.967 --rc genhtml_legend=1 00:39:14.967 --rc geninfo_all_blocks=1 00:39:14.967 --rc geninfo_unexecuted_blocks=1 00:39:14.967 00:39:14.967 ' 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
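The scripts/common.sh trace above is lcov version gating: the call 'lt 1.15 2' splits each version string on dots into an array and compares it numeric field by numeric field. A condensed, hedged sketch of that comparison (illustrative only, not the exact SPDK cmp_versions helper, which also handles '>', '=' and the decimal conversion seen in the trace):

version_lt() {   # return 0 (true) if $1 sorts before $2, comparing dot/dash-separated numeric fields
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v a b
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        a=${ver1[v]:-0}; b=${ver2[v]:-0}   # missing fields compare as 0
        ((a > b)) && return 1
        ((a < b)) && return 0
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov predates 2.x"   # same branch the trace takes for lcov 1.15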
00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=105ec898-1662-46bd-85be-b241e399edb9 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:14.967 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:39:14.967 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:14.968 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:14.968 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:14.968 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:14.968 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:14.968 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:14.968 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:14.968 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:14.968 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:39:14.968 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:39:14.968 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:39:14.968 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:39:14.968 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:39:14.968 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:39:14.968 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:14.968 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:39:14.968 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:39:14.968 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:39:14.968 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:14.968 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:39:14.968 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:39:14.968 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:39:14.968 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:39:14.968 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:39:14.968 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:39:14.968 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:14.968 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:39:14.968 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:39:14.968 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:39:14.968 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:39:14.968 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:39:14.968 Cannot find device "nvmf_init_br" 00:39:14.968 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:39:14.968 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:39:14.968 Cannot find device "nvmf_init_br2" 00:39:14.968 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:39:14.968 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:39:14.968 Cannot find device "nvmf_tgt_br" 00:39:14.968 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:39:14.968 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:39:14.968 Cannot find device "nvmf_tgt_br2" 00:39:14.968 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:39:14.968 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:39:14.968 Cannot find device "nvmf_init_br" 00:39:14.968 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:39:14.968 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:39:14.968 Cannot find device "nvmf_init_br2" 00:39:14.968 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:39:14.968 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:39:15.241 Cannot find device "nvmf_tgt_br" 00:39:15.241 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:39:15.241 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:39:15.241 Cannot find device "nvmf_tgt_br2" 00:39:15.241 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:39:15.241 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:39:15.241 Cannot find device "nvmf_br" 00:39:15.241 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:39:15.241 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:39:15.241 Cannot find device "nvmf_init_if" 00:39:15.241 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:39:15.241 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:39:15.241 Cannot find device "nvmf_init_if2" 00:39:15.241 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:39:15.241 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:39:15.241 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:15.241 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:39:15.241 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:39:15.241 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:15.241 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:39:15.241 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:39:15.241 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:39:15.241 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:39:15.241 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:39:15.241 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:39:15.241 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:39:15.241 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:39:15.241 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:39:15.241 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:39:15.241 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:39:15.241 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:39:15.241 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:39:15.241 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:39:15.241 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:39:15.241 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:39:15.241 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:39:15.241 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:39:15.241 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:39:15.241 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:39:15.241 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:39:15.241 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:39:15.241 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:39:15.241 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:39:15.241 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:39:15.241 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:39:15.241 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
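At this point nvmf_veth_init has rebuilt the test topology: two veth pairs whose target ends (nvmf_tgt_if at 10.0.0.3, nvmf_tgt_if2 at 10.0.0.4) sit inside the nvmf_tgt_ns_spdk namespace, initiator ends (nvmf_init_if at 10.0.0.1, nvmf_init_if2 at 10.0.0.2) on the host, and the nvmf_br bridge joining the peer interfaces. A minimal sketch of one initiator/target pair, assuming root and a clean namespace (the real helper also wires up the second pair and adds the iptables rules and ping checks that follow in the trace):

ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end plus its bridge port
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end plus its bridge port
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move the target end into the namespace

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                      # bridge the host-side peer ends together
ip link set nvmf_tgt_br master nvmf_br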
00:39:15.241 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:39:15.241 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:39:15.241 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:39:15.241 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:39:15.241 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:39:15.241 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:39:15.501 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:39:15.501 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:39:15.501 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:39:15.501 00:39:15.501 --- 10.0.0.3 ping statistics --- 00:39:15.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:15.501 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:39:15.501 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:39:15.501 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:39:15.501 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:39:15.501 00:39:15.501 --- 10.0.0.4 ping statistics --- 00:39:15.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:15.501 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:39:15.501 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:39:15.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:15.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:39:15.501 00:39:15.501 --- 10.0.0.1 ping statistics --- 00:39:15.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:15.501 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:39:15.501 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:39:15.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:39:15.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:39:15.501 00:39:15.502 --- 10.0.0.2 ping statistics --- 00:39:15.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:15.502 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:39:15.502 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:15.502 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:39:15.502 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:15.502 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:15.502 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:15.502 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:15.502 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:15.502 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:15.502 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:15.502 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:39:15.502 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:15.502 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:15.502 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:39:15.502 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=73742 00:39:15.502 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 73742 00:39:15.502 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:39:15.502 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 73742 ']' 00:39:15.502 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:15.502 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:15.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:15.502 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:15.502 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:15.502 14:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:39:15.502 [2024-11-20 14:00:12.654159] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:39:15.502 [2024-11-20 14:00:12.654239] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:15.502 [2024-11-20 14:00:12.801047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:15.761 [2024-11-20 14:00:12.859904] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:15.761 [2024-11-20 14:00:12.859971] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:15.761 [2024-11-20 14:00:12.859978] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:15.761 [2024-11-20 14:00:12.859983] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:15.761 [2024-11-20 14:00:12.859988] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:15.761 [2024-11-20 14:00:12.860388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:15.761 [2024-11-20 14:00:12.932460] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:16.330 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:16.330 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:39:16.330 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:16.330 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:16.330 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:39:16.330 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:16.330 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:39:16.330 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=73771 00:39:16.330 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:39:16.330 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:39:16.330 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:39:16.330 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:39:16.330 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:39:16.330 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:39:16.330 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:16.330 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:16.330 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:39:16.330 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:16.330 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:39:16.330 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:39:16.330 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:39:16.330 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:39:16.330 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:39:16.330 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=0019fb9e-2225-4bcc-9d63-e7a24f428eee 00:39:16.330 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:39:16.330 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=428dbd77-55f9-465b-adbe-c74be437ff09 00:39:16.330 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:39:16.330 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=533376e9-c2b4-4683-99c8-bde712ad32f1 00:39:16.330 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:39:16.330 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:16.330 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:39:16.330 null0 00:39:16.330 null1 00:39:16.330 [2024-11-20 14:00:13.633916] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:39:16.330 [2024-11-20 14:00:13.633971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73771 ] 00:39:16.330 null2 00:39:16.330 [2024-11-20 14:00:13.644172] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:16.591 [2024-11-20 14:00:13.668256] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:39:16.591 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:16.591 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 73771 /var/tmp/tgt2.sock 00:39:16.591 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 73771 ']' 00:39:16.591 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:39:16.591 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:16.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:39:16.591 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
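(A minimal sketch of what the nsid test does over the next stretch of log, assuming bash plus the nvme-cli, jq and lsblk calls visible above; the address, NQN, UUID and device names are the ones generated in the log, the hostnqn/hostid flags and retry limits of the real helpers are omitted, and this condensed form is not the test script itself.)

    # connect the initiator to the second target and wait for its first namespace to appear
    nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2
    until lsblk -l -o NAME | grep -q -w nvme0n1; do sleep 1; done
    # the test expects the reported NGUID to be the namespace UUID with its dashes removed
    expected=$(tr -d - <<< 0019fb9e-2225-4bcc-9d63-e7a24f428eee)   # ns1uuid from the log
    actual=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
    [[ "${actual^^}" == "${expected^^}" ]] && echo "nguid matches ns1uuid"
    nvme disconnect -d /dev/nvme0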
00:39:16.591 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:16.591 14:00:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:39:16.591 [2024-11-20 14:00:13.782443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:16.591 [2024-11-20 14:00:13.837756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:16.591 [2024-11-20 14:00:13.895007] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:16.851 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:16.851 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:39:16.851 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:39:17.110 [2024-11-20 14:00:14.408213] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:17.110 [2024-11-20 14:00:14.424260] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:39:17.370 nvme0n1 nvme0n2 00:39:17.370 nvme1n1 00:39:17.370 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:39:17.370 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:39:17.370 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid=105ec898-1662-46bd-85be-b241e399edb9 00:39:17.370 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:39:17.370 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:39:17.370 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:39:17.370 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:39:17.370 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:39:17.370 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:39:17.370 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:39:17.370 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:39:17.370 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:39:17.370 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:39:17.370 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:39:17.370 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:39:17.370 14:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:39:18.751 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:39:18.751 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:39:18.751 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:39:18.751 14:00:15 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:39:18.751 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:39:18.751 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 0019fb9e-2225-4bcc-9d63-e7a24f428eee 00:39:18.751 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:39:18.751 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:39:18.751 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:39:18.751 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:39:18.751 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:39:18.751 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=0019fb9e22254bcc9d63e7a24f428eee 00:39:18.751 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 0019FB9E22254BCC9D63E7A24F428EEE 00:39:18.751 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 0019FB9E22254BCC9D63E7A24F428EEE == \0\0\1\9\F\B\9\E\2\2\2\5\4\B\C\C\9\D\6\3\E\7\A\2\4\F\4\2\8\E\E\E ]] 00:39:18.751 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:39:18.751 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:39:18.751 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:39:18.751 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:39:18.751 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:39:18.751 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:39:18.751 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:39:18.751 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 428dbd77-55f9-465b-adbe-c74be437ff09 00:39:18.751 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:39:18.751 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:39:18.751 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:39:18.751 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:39:18.751 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:39:18.751 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=428dbd7755f9465badbec74be437ff09 00:39:18.751 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 428DBD7755F9465BADBEC74BE437FF09 00:39:18.751 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 428DBD7755F9465BADBEC74BE437FF09 == \4\2\8\D\B\D\7\7\5\5\F\9\4\6\5\B\A\D\B\E\C\7\4\B\E\4\3\7\F\F\0\9 ]] 00:39:18.751 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:39:18.751 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:39:18.751 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:39:18.751 14:00:15 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:39:18.751 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:39:18.751 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:39:18.751 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:39:18.751 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 533376e9-c2b4-4683-99c8-bde712ad32f1 00:39:18.751 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:39:18.751 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:39:18.751 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:39:18.752 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:39:18.752 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:39:18.752 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=533376e9c2b4468399c8bde712ad32f1 00:39:18.752 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 533376E9C2B4468399C8BDE712AD32F1 00:39:18.752 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 533376E9C2B4468399C8BDE712AD32F1 == \5\3\3\3\7\6\E\9\C\2\B\4\4\6\8\3\9\9\C\8\B\D\E\7\1\2\A\D\3\2\F\1 ]] 00:39:18.752 14:00:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:39:18.752 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:39:18.752 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:39:18.752 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 73771 00:39:18.752 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 73771 ']' 00:39:18.752 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 73771 00:39:18.752 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:39:18.752 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:18.752 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73771 00:39:19.011 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:19.011 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:19.011 killing process with pid 73771 00:39:19.011 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73771' 00:39:19.011 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 73771 00:39:19.011 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 73771 00:39:19.272 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:39:19.272 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:19.272 14:00:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:39:20.217 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:39:20.217 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:39:20.217 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:20.217 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:20.217 rmmod nvme_tcp 00:39:20.217 rmmod nvme_fabrics 00:39:20.217 rmmod nvme_keyring 00:39:20.217 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:20.217 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:39:20.217 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:39:20.217 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 73742 ']' 00:39:20.217 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 73742 00:39:20.217 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 73742 ']' 00:39:20.217 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 73742 00:39:20.217 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:39:20.217 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:20.217 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73742 00:39:20.217 killing process with pid 73742 00:39:20.217 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:20.217 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:20.217 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73742' 00:39:20.217 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 73742 00:39:20.217 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 73742 00:39:20.477 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:20.477 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:20.477 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:20.477 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:39:20.477 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:39:20.477 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:20.477 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:39:20.477 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:20.477 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:39:20.477 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:39:20.477 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:39:20.477 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:39:20.477 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:39:20.477 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:39:20.477 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:39:20.477 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:39:20.477 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:39:20.477 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:39:20.477 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:39:20.477 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:39:20.477 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:39:20.477 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:39:20.737 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:39:20.738 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:20.738 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:20.738 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:20.738 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:39:20.738 00:39:20.738 real 0m5.968s 00:39:20.738 user 0m7.873s 00:39:20.738 sys 0m1.889s 00:39:20.738 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:20.738 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:39:20.738 ************************************ 00:39:20.738 END TEST nvmf_nsid 00:39:20.738 ************************************ 00:39:20.738 14:00:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:39:20.738 ************************************ 00:39:20.738 END TEST nvmf_target_extra 00:39:20.738 ************************************ 00:39:20.738 00:39:20.738 real 4m41.076s 00:39:20.738 user 9m15.529s 00:39:20.738 sys 1m8.520s 00:39:20.738 14:00:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:20.738 14:00:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:39:20.738 14:00:17 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:39:20.738 14:00:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:20.738 14:00:17 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:20.738 14:00:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:20.738 ************************************ 00:39:20.738 START TEST nvmf_host 00:39:20.738 ************************************ 00:39:20.738 14:00:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:39:20.998 * Looking for test storage... 
00:39:20.998 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:39:20.998 14:00:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:20.998 14:00:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:39:20.998 14:00:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:20.998 14:00:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:20.998 14:00:18 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:20.998 14:00:18 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:20.998 14:00:18 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:20.998 14:00:18 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:39:20.998 14:00:18 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:39:20.998 14:00:18 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:39:20.998 14:00:18 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:39:20.998 14:00:18 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:39:20.998 14:00:18 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:39:20.998 14:00:18 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:39:20.998 14:00:18 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:20.998 14:00:18 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:20.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:20.999 --rc genhtml_branch_coverage=1 00:39:20.999 --rc genhtml_function_coverage=1 00:39:20.999 --rc genhtml_legend=1 00:39:20.999 --rc geninfo_all_blocks=1 00:39:20.999 --rc geninfo_unexecuted_blocks=1 00:39:20.999 00:39:20.999 ' 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:20.999 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:39:20.999 --rc genhtml_branch_coverage=1 00:39:20.999 --rc genhtml_function_coverage=1 00:39:20.999 --rc genhtml_legend=1 00:39:20.999 --rc geninfo_all_blocks=1 00:39:20.999 --rc geninfo_unexecuted_blocks=1 00:39:20.999 00:39:20.999 ' 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:20.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:20.999 --rc genhtml_branch_coverage=1 00:39:20.999 --rc genhtml_function_coverage=1 00:39:20.999 --rc genhtml_legend=1 00:39:20.999 --rc geninfo_all_blocks=1 00:39:20.999 --rc geninfo_unexecuted_blocks=1 00:39:20.999 00:39:20.999 ' 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:20.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:20.999 --rc genhtml_branch_coverage=1 00:39:20.999 --rc genhtml_function_coverage=1 00:39:20.999 --rc genhtml_legend=1 00:39:20.999 --rc geninfo_all_blocks=1 00:39:20.999 --rc geninfo_unexecuted_blocks=1 00:39:20.999 00:39:20.999 ' 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=105ec898-1662-46bd-85be-b241e399edb9 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:20.999 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:39:20.999 
14:00:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:20.999 14:00:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:39:21.000 ************************************ 00:39:21.000 START TEST nvmf_identify 00:39:21.000 ************************************ 00:39:21.000 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:39:21.260 * Looking for test storage... 00:39:21.260 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:39:21.260 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:21.260 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:21.260 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:39:21.260 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:21.260 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:21.260 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:21.260 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:21.260 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:39:21.260 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:39:21.260 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:39:21.260 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:39:21.260 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:39:21.260 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:39:21.260 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:39:21.260 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:21.260 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:39:21.260 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:39:21.260 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:21.260 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:21.260 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:39:21.260 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:39:21.260 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:21.260 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:39:21.260 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:39:21.260 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:39:21.260 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:39:21.260 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:21.260 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:39:21.260 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:39:21.260 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:21.260 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:21.260 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:39:21.260 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:21.260 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:21.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:21.260 --rc genhtml_branch_coverage=1 00:39:21.260 --rc genhtml_function_coverage=1 00:39:21.260 --rc genhtml_legend=1 00:39:21.260 --rc geninfo_all_blocks=1 00:39:21.260 --rc geninfo_unexecuted_blocks=1 00:39:21.260 00:39:21.260 ' 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:21.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:21.261 --rc genhtml_branch_coverage=1 00:39:21.261 --rc genhtml_function_coverage=1 00:39:21.261 --rc genhtml_legend=1 00:39:21.261 --rc geninfo_all_blocks=1 00:39:21.261 --rc geninfo_unexecuted_blocks=1 00:39:21.261 00:39:21.261 ' 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:21.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:21.261 --rc genhtml_branch_coverage=1 00:39:21.261 --rc genhtml_function_coverage=1 00:39:21.261 --rc genhtml_legend=1 00:39:21.261 --rc geninfo_all_blocks=1 00:39:21.261 --rc geninfo_unexecuted_blocks=1 00:39:21.261 00:39:21.261 ' 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:21.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:21.261 --rc genhtml_branch_coverage=1 00:39:21.261 --rc genhtml_function_coverage=1 00:39:21.261 --rc genhtml_legend=1 00:39:21.261 --rc geninfo_all_blocks=1 00:39:21.261 --rc geninfo_unexecuted_blocks=1 00:39:21.261 00:39:21.261 ' 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=105ec898-1662-46bd-85be-b241e399edb9 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:21.261 
14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:21.261 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:21.261 14:00:18 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:39:21.261 Cannot find device "nvmf_init_br" 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:39:21.261 Cannot find device "nvmf_init_br2" 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:39:21.261 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:39:21.521 Cannot find device "nvmf_tgt_br" 00:39:21.521 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:39:21.521 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:39:21.521 Cannot find device "nvmf_tgt_br2" 00:39:21.521 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:39:21.521 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:39:21.521 Cannot find device "nvmf_init_br" 00:39:21.521 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:39:21.521 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:39:21.521 Cannot find device "nvmf_init_br2" 00:39:21.521 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:39:21.521 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:39:21.521 Cannot find device "nvmf_tgt_br" 00:39:21.521 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:39:21.521 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:39:21.521 Cannot find device "nvmf_tgt_br2" 00:39:21.521 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:39:21.521 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:39:21.521 Cannot find device "nvmf_br" 00:39:21.521 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:39:21.521 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:39:21.521 Cannot find device "nvmf_init_if" 00:39:21.521 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:39:21.521 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:39:21.521 Cannot find device "nvmf_init_if2" 00:39:21.521 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:39:21.521 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:39:21.521 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:21.522 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:39:21.522 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:39:21.522 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:21.522 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:39:21.522 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:39:21.522 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:39:21.522 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:39:21.522 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:39:21.522 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:39:21.522 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:39:21.522 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:39:21.782 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:39:21.782 
14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:39:21.782 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:39:21.782 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:39:21.782 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:39:21.782 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:39:21.782 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:39:21.782 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:39:21.782 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:39:21.782 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:39:21.782 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:39:21.782 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:39:21.782 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:39:21.782 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:39:21.782 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:39:21.782 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:39:21.782 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:39:21.782 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:39:21.782 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:39:21.782 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:39:21.782 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:39:21.782 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:39:21.782 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:39:21.782 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:39:21.782 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:39:21.782 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:39:21.782 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:39:21.782 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.099 ms 00:39:21.782 00:39:21.782 --- 10.0.0.3 ping statistics --- 00:39:21.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:21.782 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:39:21.782 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:39:21.782 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:39:21.782 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.068 ms 00:39:21.782 00:39:21.782 --- 10.0.0.4 ping statistics --- 00:39:21.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:21.782 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:39:21.782 14:00:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:39:21.782 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:21.782 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:39:21.782 00:39:21.782 --- 10.0.0.1 ping statistics --- 00:39:21.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:21.782 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:39:21.782 14:00:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:39:21.782 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:21.782 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:39:21.782 00:39:21.782 --- 10.0.0.2 ping statistics --- 00:39:21.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:21.782 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:39:21.782 14:00:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:21.782 14:00:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:39:21.782 14:00:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:21.782 14:00:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:21.782 14:00:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:21.782 14:00:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:21.782 14:00:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:21.782 14:00:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:21.782 14:00:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:21.782 14:00:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:39:21.782 14:00:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:21.782 14:00:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:39:21.782 14:00:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74136 00:39:21.782 14:00:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:39:21.782 14:00:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:21.782 14:00:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74136 00:39:21.782 14:00:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 74136 ']' 00:39:21.782 
14:00:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:21.783 14:00:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:21.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:21.783 14:00:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:21.783 14:00:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:21.783 14:00:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:39:21.783 [2024-11-20 14:00:19.102665] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:39:21.783 [2024-11-20 14:00:19.102738] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:22.041 [2024-11-20 14:00:19.255996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:22.041 [2024-11-20 14:00:19.317523] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:22.041 [2024-11-20 14:00:19.317578] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:22.041 [2024-11-20 14:00:19.317585] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:22.041 [2024-11-20 14:00:19.317590] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:22.041 [2024-11-20 14:00:19.317595] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
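The nvmf/common.sh steps traced above assemble the test network in place: the initiator-side and target-side interfaces get their 10.0.0.1-10.0.0.4 addresses, all of them are joined through the nvmf_br bridge, TCP port 4420 is opened in iptables, and nvmf_tgt is then launched inside the nvmf_tgt_ns_spdk namespace while waitforlisten blocks until /var/tmp/spdk.sock appears. A condensed, hedged sketch of that plumbing follows; the veth pairs themselves are created by common.sh earlier in the run and are assumed to already exist here.

    # addresses: 10.0.0.1/2 on the host side, 10.0.0.3/4 inside the target namespace
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    # one bridge joining the host-side ends of the veth pairs
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # allow NVMe/TCP (port 4420) in from the initiator interfaces
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    # run the target inside the namespace; waitforlisten then polls the RPC socket
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &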
00:39:22.041 [2024-11-20 14:00:19.318495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:22.041 [2024-11-20 14:00:19.318605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:22.041 [2024-11-20 14:00:19.318696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:22.041 [2024-11-20 14:00:19.318740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:22.299 [2024-11-20 14:00:19.384288] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:22.869 14:00:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:22.869 14:00:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:39:22.869 14:00:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:22.869 14:00:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:22.869 14:00:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:39:22.869 [2024-11-20 14:00:19.976539] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:22.869 14:00:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:22.869 14:00:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:39:22.869 14:00:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:22.869 14:00:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:39:22.869 14:00:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:22.869 14:00:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:22.869 14:00:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:39:22.869 Malloc0 00:39:22.869 14:00:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:22.869 14:00:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:22.869 14:00:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:22.869 14:00:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:39:22.869 14:00:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:22.869 14:00:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:39:22.869 14:00:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:22.869 14:00:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:39:22.869 14:00:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:22.869 14:00:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:39:22.869 14:00:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:22.869 14:00:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:39:22.869 [2024-11-20 14:00:20.104719] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:39:22.869 14:00:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:22.869 14:00:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:39:22.869 14:00:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:22.869 14:00:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:39:22.869 14:00:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:22.869 14:00:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:39:22.869 14:00:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:22.869 14:00:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:39:22.869 [ 00:39:22.869 { 00:39:22.869 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:39:22.869 "subtype": "Discovery", 00:39:22.869 "listen_addresses": [ 00:39:22.869 { 00:39:22.869 "trtype": "TCP", 00:39:22.869 "adrfam": "IPv4", 00:39:22.869 "traddr": "10.0.0.3", 00:39:22.869 "trsvcid": "4420" 00:39:22.869 } 00:39:22.869 ], 00:39:22.869 "allow_any_host": true, 00:39:22.869 "hosts": [] 00:39:22.869 }, 00:39:22.869 { 00:39:22.869 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:39:22.869 "subtype": "NVMe", 00:39:22.869 "listen_addresses": [ 00:39:22.869 { 00:39:22.869 "trtype": "TCP", 00:39:22.869 "adrfam": "IPv4", 00:39:22.869 "traddr": "10.0.0.3", 00:39:22.869 "trsvcid": "4420" 00:39:22.869 } 00:39:22.869 ], 00:39:22.869 "allow_any_host": true, 00:39:22.869 "hosts": [], 00:39:22.869 "serial_number": "SPDK00000000000001", 00:39:22.869 "model_number": "SPDK bdev Controller", 00:39:22.869 "max_namespaces": 32, 00:39:22.869 "min_cntlid": 1, 00:39:22.869 "max_cntlid": 65519, 00:39:22.869 "namespaces": [ 00:39:22.869 { 00:39:22.869 "nsid": 1, 00:39:22.869 "bdev_name": "Malloc0", 00:39:22.869 "name": "Malloc0", 00:39:22.869 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:39:22.869 "eui64": "ABCDEF0123456789", 00:39:22.869 "uuid": "e11a4505-f571-4dfc-9882-b16d55ce3637" 00:39:22.869 } 00:39:22.869 ] 00:39:22.869 } 00:39:22.869 ] 00:39:22.869 14:00:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:22.869 14:00:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:39:22.869 [2024-11-20 14:00:20.171158] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
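The spdk_nvme_identify run whose startup banner begins here queries the discovery subsystem that the rpc_cmd calls just above configured. rpc_cmd in the test harness forwards its arguments to scripts/rpc.py, so the same configuration can be reproduced standalone against the target's default /var/tmp/spdk.sock (hedged sketch, same arguments as in the trace above):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    scripts/rpc.py nvmf_get_subsystems    # returns the JSON listing shown above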
00:39:22.869 [2024-11-20 14:00:20.171202] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74171 ] 00:39:23.132 [2024-11-20 14:00:20.311749] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:39:23.132 [2024-11-20 14:00:20.311859] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:39:23.132 [2024-11-20 14:00:20.311881] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:39:23.132 [2024-11-20 14:00:20.311909] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:39:23.132 [2024-11-20 14:00:20.314957] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:39:23.132 [2024-11-20 14:00:20.315288] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:39:23.132 [2024-11-20 14:00:20.315352] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xcb8750 0 00:39:23.132 [2024-11-20 14:00:20.326724] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:39:23.132 [2024-11-20 14:00:20.326742] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:39:23.132 [2024-11-20 14:00:20.326746] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:39:23.132 [2024-11-20 14:00:20.326748] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:39:23.132 [2024-11-20 14:00:20.326775] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.132 [2024-11-20 14:00:20.326780] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.132 [2024-11-20 14:00:20.326783] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcb8750) 00:39:23.132 [2024-11-20 14:00:20.326796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:39:23.132 [2024-11-20 14:00:20.326828] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd1c740, cid 0, qid 0 00:39:23.132 [2024-11-20 14:00:20.333733] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.132 [2024-11-20 14:00:20.333746] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.132 [2024-11-20 14:00:20.333749] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.132 [2024-11-20 14:00:20.333752] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd1c740) on tqpair=0xcb8750 00:39:23.132 [2024-11-20 14:00:20.333762] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:39:23.132 [2024-11-20 14:00:20.333769] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:39:23.132 [2024-11-20 14:00:20.333773] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:39:23.132 [2024-11-20 14:00:20.333802] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.132 [2024-11-20 14:00:20.333805] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
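Everything from this point in the identify run is the effect of the -L all flag, which enables the nvme and nvme_tcp debug log components, so the host-side bring-up (FABRIC CONNECT, VS/CAP property reads, the CC.EN enable, then IDENTIFY) is traced entry by entry. For just the controller report without the trace, the same transport ID string from the invocation above can be reused without -L all (hedged sketch):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'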
00:39:23.132 [2024-11-20 14:00:20.333808] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcb8750) 00:39:23.132 [2024-11-20 14:00:20.333815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.132 [2024-11-20 14:00:20.333832] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd1c740, cid 0, qid 0 00:39:23.132 [2024-11-20 14:00:20.333885] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.132 [2024-11-20 14:00:20.333890] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.132 [2024-11-20 14:00:20.333892] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.132 [2024-11-20 14:00:20.333895] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd1c740) on tqpair=0xcb8750 00:39:23.132 [2024-11-20 14:00:20.333900] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:39:23.132 [2024-11-20 14:00:20.333904] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:39:23.132 [2024-11-20 14:00:20.333909] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.132 [2024-11-20 14:00:20.333912] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.132 [2024-11-20 14:00:20.333917] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcb8750) 00:39:23.132 [2024-11-20 14:00:20.333921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.132 [2024-11-20 14:00:20.333932] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd1c740, cid 0, qid 0 00:39:23.132 [2024-11-20 14:00:20.333979] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.132 [2024-11-20 14:00:20.333983] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.132 [2024-11-20 14:00:20.333986] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.132 [2024-11-20 14:00:20.333988] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd1c740) on tqpair=0xcb8750 00:39:23.132 [2024-11-20 14:00:20.333992] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:39:23.132 [2024-11-20 14:00:20.333997] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:39:23.132 [2024-11-20 14:00:20.334002] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.132 [2024-11-20 14:00:20.334004] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.132 [2024-11-20 14:00:20.334007] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcb8750) 00:39:23.132 [2024-11-20 14:00:20.334011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.132 [2024-11-20 14:00:20.334020] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd1c740, cid 0, qid 0 00:39:23.132 [2024-11-20 14:00:20.334064] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.132 [2024-11-20 14:00:20.334069] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.132 [2024-11-20 14:00:20.334071] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.132 [2024-11-20 14:00:20.334073] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd1c740) on tqpair=0xcb8750 00:39:23.132 [2024-11-20 14:00:20.334077] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:39:23.132 [2024-11-20 14:00:20.334084] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.132 [2024-11-20 14:00:20.334087] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.132 [2024-11-20 14:00:20.334089] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcb8750) 00:39:23.132 [2024-11-20 14:00:20.334094] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.132 [2024-11-20 14:00:20.334103] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd1c740, cid 0, qid 0 00:39:23.132 [2024-11-20 14:00:20.334160] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.132 [2024-11-20 14:00:20.334164] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.132 [2024-11-20 14:00:20.334166] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.132 [2024-11-20 14:00:20.334169] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd1c740) on tqpair=0xcb8750 00:39:23.132 [2024-11-20 14:00:20.334173] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:39:23.132 [2024-11-20 14:00:20.334176] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:39:23.133 [2024-11-20 14:00:20.334181] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:39:23.133 [2024-11-20 14:00:20.334287] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:39:23.133 [2024-11-20 14:00:20.334298] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:39:23.133 [2024-11-20 14:00:20.334305] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.133 [2024-11-20 14:00:20.334307] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.133 [2024-11-20 14:00:20.334310] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcb8750) 00:39:23.133 [2024-11-20 14:00:20.334314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.133 [2024-11-20 14:00:20.334325] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd1c740, cid 0, qid 0 00:39:23.133 [2024-11-20 14:00:20.334367] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.133 [2024-11-20 14:00:20.334372] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.133 [2024-11-20 14:00:20.334374] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:39:23.133 [2024-11-20 14:00:20.334376] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd1c740) on tqpair=0xcb8750 00:39:23.133 [2024-11-20 14:00:20.334380] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:39:23.133 [2024-11-20 14:00:20.334386] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.133 [2024-11-20 14:00:20.334388] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.133 [2024-11-20 14:00:20.334392] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcb8750) 00:39:23.133 [2024-11-20 14:00:20.334397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.133 [2024-11-20 14:00:20.334406] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd1c740, cid 0, qid 0 00:39:23.133 [2024-11-20 14:00:20.334452] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.133 [2024-11-20 14:00:20.334456] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.133 [2024-11-20 14:00:20.334458] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.133 [2024-11-20 14:00:20.334461] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd1c740) on tqpair=0xcb8750 00:39:23.133 [2024-11-20 14:00:20.334464] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:39:23.133 [2024-11-20 14:00:20.334467] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:39:23.133 [2024-11-20 14:00:20.334472] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:39:23.133 [2024-11-20 14:00:20.334482] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:39:23.133 [2024-11-20 14:00:20.334490] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.133 [2024-11-20 14:00:20.334493] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcb8750) 00:39:23.133 [2024-11-20 14:00:20.334498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.133 [2024-11-20 14:00:20.334508] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd1c740, cid 0, qid 0 00:39:23.133 [2024-11-20 14:00:20.334590] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:39:23.133 [2024-11-20 14:00:20.334595] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:39:23.133 [2024-11-20 14:00:20.334597] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:39:23.133 [2024-11-20 14:00:20.334600] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcb8750): datao=0, datal=4096, cccid=0 00:39:23.133 [2024-11-20 14:00:20.334603] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd1c740) on tqpair(0xcb8750): expected_datao=0, payload_size=4096 00:39:23.133 [2024-11-20 14:00:20.334606] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
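At this point the trace shows the discovery controller reaching the ready state (CC.EN = 1 && CSTS.RDY = 1) and the host issuing IDENTIFY; the rest of the trace fetches the identify data and the discovery log printed further down. Because nvme-tcp was modprobed earlier in the run, the same listener can also be sanity-checked from the kernel initiator with nvme-cli (hedged; nvme-cli is assumed to be installed and is not part of this test):

    nvme discover -t tcp -a 10.0.0.3 -s 4420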
00:39:23.133 [2024-11-20 14:00:20.334613] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:39:23.133 [2024-11-20 14:00:20.334616] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:39:23.133 [2024-11-20 14:00:20.334622] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.133 [2024-11-20 14:00:20.334626] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.133 [2024-11-20 14:00:20.334631] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.133 [2024-11-20 14:00:20.334633] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd1c740) on tqpair=0xcb8750 00:39:23.133 [2024-11-20 14:00:20.334639] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:39:23.133 [2024-11-20 14:00:20.334642] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:39:23.133 [2024-11-20 14:00:20.334645] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:39:23.133 [2024-11-20 14:00:20.334648] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:39:23.133 [2024-11-20 14:00:20.334651] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:39:23.133 [2024-11-20 14:00:20.334654] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:39:23.133 [2024-11-20 14:00:20.334662] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:39:23.133 [2024-11-20 14:00:20.334667] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.133 [2024-11-20 14:00:20.334669] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.133 [2024-11-20 14:00:20.334674] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcb8750) 00:39:23.133 [2024-11-20 14:00:20.334679] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:23.133 [2024-11-20 14:00:20.334689] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd1c740, cid 0, qid 0 00:39:23.133 [2024-11-20 14:00:20.334763] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.133 [2024-11-20 14:00:20.334768] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.133 [2024-11-20 14:00:20.334771] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.133 [2024-11-20 14:00:20.334773] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd1c740) on tqpair=0xcb8750 00:39:23.133 [2024-11-20 14:00:20.334779] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.133 [2024-11-20 14:00:20.334781] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.133 [2024-11-20 14:00:20.334784] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcb8750) 00:39:23.133 [2024-11-20 14:00:20.334788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:39:23.133 [2024-11-20 14:00:20.334792] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.133 [2024-11-20 14:00:20.334795] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.133 [2024-11-20 14:00:20.334797] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xcb8750) 00:39:23.133 [2024-11-20 14:00:20.334801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:39:23.133 [2024-11-20 14:00:20.334805] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.133 [2024-11-20 14:00:20.334807] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.133 [2024-11-20 14:00:20.334809] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xcb8750) 00:39:23.133 [2024-11-20 14:00:20.334813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:39:23.133 [2024-11-20 14:00:20.334817] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.133 [2024-11-20 14:00:20.334828] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.133 [2024-11-20 14:00:20.334831] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcb8750) 00:39:23.133 [2024-11-20 14:00:20.334835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:39:23.133 [2024-11-20 14:00:20.334838] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:39:23.133 [2024-11-20 14:00:20.334847] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:39:23.133 [2024-11-20 14:00:20.334851] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.133 [2024-11-20 14:00:20.334853] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcb8750) 00:39:23.133 [2024-11-20 14:00:20.334858] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.133 [2024-11-20 14:00:20.334872] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd1c740, cid 0, qid 0 00:39:23.133 [2024-11-20 14:00:20.334875] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd1c8c0, cid 1, qid 0 00:39:23.133 [2024-11-20 14:00:20.334879] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd1ca40, cid 2, qid 0 00:39:23.133 [2024-11-20 14:00:20.334882] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd1cbc0, cid 3, qid 0 00:39:23.133 [2024-11-20 14:00:20.334885] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd1cd40, cid 4, qid 0 00:39:23.133 [2024-11-20 14:00:20.334993] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.133 [2024-11-20 14:00:20.334997] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.133 [2024-11-20 14:00:20.335000] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.133 [2024-11-20 14:00:20.335002] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd1cd40) on tqpair=0xcb8750 00:39:23.133 [2024-11-20 14:00:20.335007] 
nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:39:23.133 [2024-11-20 14:00:20.335010] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:39:23.133 [2024-11-20 14:00:20.335017] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.133 [2024-11-20 14:00:20.335020] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcb8750) 00:39:23.133 [2024-11-20 14:00:20.335024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.133 [2024-11-20 14:00:20.335034] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd1cd40, cid 4, qid 0 00:39:23.133 [2024-11-20 14:00:20.335081] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:39:23.133 [2024-11-20 14:00:20.335085] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:39:23.133 [2024-11-20 14:00:20.335088] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:39:23.133 [2024-11-20 14:00:20.335090] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcb8750): datao=0, datal=4096, cccid=4 00:39:23.133 [2024-11-20 14:00:20.335093] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd1cd40) on tqpair(0xcb8750): expected_datao=0, payload_size=4096 00:39:23.133 [2024-11-20 14:00:20.335096] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.133 [2024-11-20 14:00:20.335101] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:39:23.133 [2024-11-20 14:00:20.335103] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:39:23.133 [2024-11-20 14:00:20.335115] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.133 [2024-11-20 14:00:20.335119] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.133 [2024-11-20 14:00:20.335121] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.133 [2024-11-20 14:00:20.335123] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd1cd40) on tqpair=0xcb8750 00:39:23.133 [2024-11-20 14:00:20.335133] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:39:23.133 [2024-11-20 14:00:20.335159] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.133 [2024-11-20 14:00:20.335163] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcb8750) 00:39:23.133 [2024-11-20 14:00:20.335167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.133 [2024-11-20 14:00:20.335172] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.133 [2024-11-20 14:00:20.335174] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.133 [2024-11-20 14:00:20.335177] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xcb8750) 00:39:23.133 [2024-11-20 14:00:20.335183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:39:23.133 [2024-11-20 14:00:20.335197] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0xd1cd40, cid 4, qid 0 00:39:23.133 [2024-11-20 14:00:20.335201] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd1cec0, cid 5, qid 0 00:39:23.133 [2024-11-20 14:00:20.335303] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:39:23.133 [2024-11-20 14:00:20.335312] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:39:23.133 [2024-11-20 14:00:20.335314] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:39:23.133 [2024-11-20 14:00:20.335317] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcb8750): datao=0, datal=1024, cccid=4 00:39:23.133 [2024-11-20 14:00:20.335320] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd1cd40) on tqpair(0xcb8750): expected_datao=0, payload_size=1024 00:39:23.133 [2024-11-20 14:00:20.335323] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.133 [2024-11-20 14:00:20.335327] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:39:23.133 [2024-11-20 14:00:20.335330] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:39:23.133 [2024-11-20 14:00:20.335333] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.133 [2024-11-20 14:00:20.335337] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.133 [2024-11-20 14:00:20.335339] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.133 [2024-11-20 14:00:20.335344] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd1cec0) on tqpair=0xcb8750 00:39:23.133 [2024-11-20 14:00:20.335355] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.133 [2024-11-20 14:00:20.335359] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.133 [2024-11-20 14:00:20.335361] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.133 [2024-11-20 14:00:20.335364] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd1cd40) on tqpair=0xcb8750 00:39:23.133 [2024-11-20 14:00:20.335373] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.133 [2024-11-20 14:00:20.335375] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcb8750) 00:39:23.133 [2024-11-20 14:00:20.335380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.133 [2024-11-20 14:00:20.335392] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd1cd40, cid 4, qid 0 00:39:23.133 [2024-11-20 14:00:20.335441] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:39:23.133 [2024-11-20 14:00:20.335446] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:39:23.133 [2024-11-20 14:00:20.335448] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:39:23.133 [2024-11-20 14:00:20.335450] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcb8750): datao=0, datal=3072, cccid=4 00:39:23.133 [2024-11-20 14:00:20.335453] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd1cd40) on tqpair(0xcb8750): expected_datao=0, payload_size=3072 00:39:23.133 [2024-11-20 14:00:20.335456] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.133 [2024-11-20 14:00:20.335460] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:39:23.133 [2024-11-20 14:00:20.335462] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:39:23.133 [2024-11-20 14:00:20.335470] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.133 [2024-11-20 14:00:20.335474] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.133 [2024-11-20 14:00:20.335477] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.133 [2024-11-20 14:00:20.335479] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd1cd40) on tqpair=0xcb8750 00:39:23.133 [2024-11-20 14:00:20.335485] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.133 [2024-11-20 14:00:20.335487] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcb8750) 00:39:23.133 [2024-11-20 14:00:20.335492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.133 [2024-11-20 14:00:20.335504] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd1cd40, cid 4, qid 0 00:39:23.133 [2024-11-20 14:00:20.335555] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:39:23.133 [2024-11-20 14:00:20.335560] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:39:23.133 [2024-11-20 14:00:20.335562] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:39:23.133 [2024-11-20 14:00:20.335564] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcb8750): datao=0, datal=8, cccid=4 00:39:23.134 [2024-11-20 14:00:20.335566] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd1cd40) on tqpair(0xcb8750): expected_datao=0, payload_size=8 00:39:23.134 [2024-11-20 14:00:20.335569] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.134 [2024-11-20 14:00:20.335573] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:39:23.134 [2024-11-20 14:00:20.335575] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:39:23.134 [2024-11-20 14:00:20.335591] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.134 [2024-11-20 14:00:20.335596] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.134 [2024-11-20 14:00:20.335597] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.134 [2024-11-20 14:00:20.335600] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd1cd40) on tqpair=0xcb8750 00:39:23.134 ===================================================== 00:39:23.134 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:39:23.134 ===================================================== 00:39:23.134 Controller Capabilities/Features 00:39:23.134 ================================ 00:39:23.134 Vendor ID: 0000 00:39:23.134 Subsystem Vendor ID: 0000 00:39:23.134 Serial Number: .................... 00:39:23.134 Model Number: ........................................ 
00:39:23.134 Firmware Version: 25.01 00:39:23.134 Recommended Arb Burst: 0 00:39:23.134 IEEE OUI Identifier: 00 00 00 00:39:23.134 Multi-path I/O 00:39:23.134 May have multiple subsystem ports: No 00:39:23.134 May have multiple controllers: No 00:39:23.134 Associated with SR-IOV VF: No 00:39:23.134 Max Data Transfer Size: 131072 00:39:23.134 Max Number of Namespaces: 0 00:39:23.134 Max Number of I/O Queues: 1024 00:39:23.134 NVMe Specification Version (VS): 1.3 00:39:23.134 NVMe Specification Version (Identify): 1.3 00:39:23.134 Maximum Queue Entries: 128 00:39:23.134 Contiguous Queues Required: Yes 00:39:23.134 Arbitration Mechanisms Supported 00:39:23.134 Weighted Round Robin: Not Supported 00:39:23.134 Vendor Specific: Not Supported 00:39:23.134 Reset Timeout: 15000 ms 00:39:23.134 Doorbell Stride: 4 bytes 00:39:23.134 NVM Subsystem Reset: Not Supported 00:39:23.134 Command Sets Supported 00:39:23.134 NVM Command Set: Supported 00:39:23.134 Boot Partition: Not Supported 00:39:23.134 Memory Page Size Minimum: 4096 bytes 00:39:23.134 Memory Page Size Maximum: 4096 bytes 00:39:23.134 Persistent Memory Region: Not Supported 00:39:23.134 Optional Asynchronous Events Supported 00:39:23.134 Namespace Attribute Notices: Not Supported 00:39:23.134 Firmware Activation Notices: Not Supported 00:39:23.134 ANA Change Notices: Not Supported 00:39:23.134 PLE Aggregate Log Change Notices: Not Supported 00:39:23.134 LBA Status Info Alert Notices: Not Supported 00:39:23.134 EGE Aggregate Log Change Notices: Not Supported 00:39:23.134 Normal NVM Subsystem Shutdown event: Not Supported 00:39:23.134 Zone Descriptor Change Notices: Not Supported 00:39:23.134 Discovery Log Change Notices: Supported 00:39:23.134 Controller Attributes 00:39:23.134 128-bit Host Identifier: Not Supported 00:39:23.134 Non-Operational Permissive Mode: Not Supported 00:39:23.134 NVM Sets: Not Supported 00:39:23.134 Read Recovery Levels: Not Supported 00:39:23.134 Endurance Groups: Not Supported 00:39:23.134 Predictable Latency Mode: Not Supported 00:39:23.134 Traffic Based Keep ALive: Not Supported 00:39:23.134 Namespace Granularity: Not Supported 00:39:23.134 SQ Associations: Not Supported 00:39:23.134 UUID List: Not Supported 00:39:23.134 Multi-Domain Subsystem: Not Supported 00:39:23.134 Fixed Capacity Management: Not Supported 00:39:23.134 Variable Capacity Management: Not Supported 00:39:23.134 Delete Endurance Group: Not Supported 00:39:23.134 Delete NVM Set: Not Supported 00:39:23.134 Extended LBA Formats Supported: Not Supported 00:39:23.134 Flexible Data Placement Supported: Not Supported 00:39:23.134 00:39:23.134 Controller Memory Buffer Support 00:39:23.134 ================================ 00:39:23.134 Supported: No 00:39:23.134 00:39:23.134 Persistent Memory Region Support 00:39:23.134 ================================ 00:39:23.134 Supported: No 00:39:23.134 00:39:23.134 Admin Command Set Attributes 00:39:23.134 ============================ 00:39:23.134 Security Send/Receive: Not Supported 00:39:23.134 Format NVM: Not Supported 00:39:23.134 Firmware Activate/Download: Not Supported 00:39:23.134 Namespace Management: Not Supported 00:39:23.134 Device Self-Test: Not Supported 00:39:23.134 Directives: Not Supported 00:39:23.134 NVMe-MI: Not Supported 00:39:23.134 Virtualization Management: Not Supported 00:39:23.134 Doorbell Buffer Config: Not Supported 00:39:23.134 Get LBA Status Capability: Not Supported 00:39:23.134 Command & Feature Lockdown Capability: Not Supported 00:39:23.134 Abort Command Limit: 1 00:39:23.134 Async 
Event Request Limit: 4 00:39:23.134 Number of Firmware Slots: N/A 00:39:23.134 Firmware Slot 1 Read-Only: N/A 00:39:23.134 Firmware Activation Without Reset: N/A 00:39:23.134 Multiple Update Detection Support: N/A 00:39:23.134 Firmware Update Granularity: No Information Provided 00:39:23.134 Per-Namespace SMART Log: No 00:39:23.134 Asymmetric Namespace Access Log Page: Not Supported 00:39:23.134 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:39:23.134 Command Effects Log Page: Not Supported 00:39:23.134 Get Log Page Extended Data: Supported 00:39:23.134 Telemetry Log Pages: Not Supported 00:39:23.134 Persistent Event Log Pages: Not Supported 00:39:23.134 Supported Log Pages Log Page: May Support 00:39:23.134 Commands Supported & Effects Log Page: Not Supported 00:39:23.134 Feature Identifiers & Effects Log Page:May Support 00:39:23.134 NVMe-MI Commands & Effects Log Page: May Support 00:39:23.134 Data Area 4 for Telemetry Log: Not Supported 00:39:23.134 Error Log Page Entries Supported: 128 00:39:23.134 Keep Alive: Not Supported 00:39:23.134 00:39:23.134 NVM Command Set Attributes 00:39:23.134 ========================== 00:39:23.134 Submission Queue Entry Size 00:39:23.134 Max: 1 00:39:23.134 Min: 1 00:39:23.134 Completion Queue Entry Size 00:39:23.134 Max: 1 00:39:23.134 Min: 1 00:39:23.134 Number of Namespaces: 0 00:39:23.134 Compare Command: Not Supported 00:39:23.134 Write Uncorrectable Command: Not Supported 00:39:23.134 Dataset Management Command: Not Supported 00:39:23.134 Write Zeroes Command: Not Supported 00:39:23.134 Set Features Save Field: Not Supported 00:39:23.134 Reservations: Not Supported 00:39:23.134 Timestamp: Not Supported 00:39:23.134 Copy: Not Supported 00:39:23.134 Volatile Write Cache: Not Present 00:39:23.134 Atomic Write Unit (Normal): 1 00:39:23.134 Atomic Write Unit (PFail): 1 00:39:23.134 Atomic Compare & Write Unit: 1 00:39:23.134 Fused Compare & Write: Supported 00:39:23.134 Scatter-Gather List 00:39:23.134 SGL Command Set: Supported 00:39:23.134 SGL Keyed: Supported 00:39:23.134 SGL Bit Bucket Descriptor: Not Supported 00:39:23.134 SGL Metadata Pointer: Not Supported 00:39:23.134 Oversized SGL: Not Supported 00:39:23.134 SGL Metadata Address: Not Supported 00:39:23.134 SGL Offset: Supported 00:39:23.134 Transport SGL Data Block: Not Supported 00:39:23.134 Replay Protected Memory Block: Not Supported 00:39:23.134 00:39:23.134 Firmware Slot Information 00:39:23.134 ========================= 00:39:23.134 Active slot: 0 00:39:23.134 00:39:23.134 00:39:23.134 Error Log 00:39:23.134 ========= 00:39:23.134 00:39:23.134 Active Namespaces 00:39:23.134 ================= 00:39:23.134 Discovery Log Page 00:39:23.134 ================== 00:39:23.134 Generation Counter: 2 00:39:23.134 Number of Records: 2 00:39:23.134 Record Format: 0 00:39:23.134 00:39:23.134 Discovery Log Entry 0 00:39:23.134 ---------------------- 00:39:23.134 Transport Type: 3 (TCP) 00:39:23.134 Address Family: 1 (IPv4) 00:39:23.134 Subsystem Type: 3 (Current Discovery Subsystem) 00:39:23.134 Entry Flags: 00:39:23.134 Duplicate Returned Information: 1 00:39:23.134 Explicit Persistent Connection Support for Discovery: 1 00:39:23.134 Transport Requirements: 00:39:23.134 Secure Channel: Not Required 00:39:23.134 Port ID: 0 (0x0000) 00:39:23.134 Controller ID: 65535 (0xffff) 00:39:23.134 Admin Max SQ Size: 128 00:39:23.134 Transport Service Identifier: 4420 00:39:23.134 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:39:23.134 Transport Address: 10.0.0.3 00:39:23.134 
Discovery Log Entry 1 00:39:23.134 ---------------------- 00:39:23.134 Transport Type: 3 (TCP) 00:39:23.134 Address Family: 1 (IPv4) 00:39:23.134 Subsystem Type: 2 (NVM Subsystem) 00:39:23.134 Entry Flags: 00:39:23.134 Duplicate Returned Information: 0 00:39:23.134 Explicit Persistent Connection Support for Discovery: 0 00:39:23.134 Transport Requirements: 00:39:23.134 Secure Channel: Not Required 00:39:23.134 Port ID: 0 (0x0000) 00:39:23.134 Controller ID: 65535 (0xffff) 00:39:23.134 Admin Max SQ Size: 128 00:39:23.134 Transport Service Identifier: 4420 00:39:23.134 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:39:23.134 Transport Address: 10.0.0.3 [2024-11-20 14:00:20.335691] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:39:23.134 [2024-11-20 14:00:20.335699] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd1c740) on tqpair=0xcb8750 00:39:23.134 [2024-11-20 14:00:20.335704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:23.134 [2024-11-20 14:00:20.335718] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd1c8c0) on tqpair=0xcb8750 00:39:23.134 [2024-11-20 14:00:20.335722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:23.134 [2024-11-20 14:00:20.335725] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd1ca40) on tqpair=0xcb8750 00:39:23.134 [2024-11-20 14:00:20.335728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:23.134 [2024-11-20 14:00:20.335731] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd1cbc0) on tqpair=0xcb8750 00:39:23.134 [2024-11-20 14:00:20.335734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:23.134 [2024-11-20 14:00:20.335740] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.134 [2024-11-20 14:00:20.335742] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.134 [2024-11-20 14:00:20.335744] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcb8750) 00:39:23.134 [2024-11-20 14:00:20.335749] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.134 [2024-11-20 14:00:20.335762] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd1cbc0, cid 3, qid 0 00:39:23.134 [2024-11-20 14:00:20.335803] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.134 [2024-11-20 14:00:20.335808] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.134 [2024-11-20 14:00:20.335810] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.134 [2024-11-20 14:00:20.335813] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd1cbc0) on tqpair=0xcb8750 00:39:23.134 [2024-11-20 14:00:20.335818] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.134 [2024-11-20 14:00:20.335820] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.134 [2024-11-20 14:00:20.335822] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcb8750) 00:39:23.134 [2024-11-20 14:00:20.335827] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.134 [2024-11-20 14:00:20.335838] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd1cbc0, cid 3, qid 0 00:39:23.134 [2024-11-20 14:00:20.335918] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.134 [2024-11-20 14:00:20.335925] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.134 [2024-11-20 14:00:20.335927] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.134 [2024-11-20 14:00:20.335930] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd1cbc0) on tqpair=0xcb8750 00:39:23.134 [2024-11-20 14:00:20.335934] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:39:23.134 [2024-11-20 14:00:20.335938] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:39:23.134 [2024-11-20 14:00:20.335944] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.134 [2024-11-20 14:00:20.335947] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.134 [2024-11-20 14:00:20.335950] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcb8750) 00:39:23.134 [2024-11-20 14:00:20.335955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.134 [2024-11-20 14:00:20.335965] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd1cbc0, cid 3, qid 0 00:39:23.134 [2024-11-20 14:00:20.336012] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.134 [2024-11-20 14:00:20.336016] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.134 [2024-11-20 14:00:20.336019] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.134 [2024-11-20 14:00:20.336021] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd1cbc0) on tqpair=0xcb8750 00:39:23.134 [2024-11-20 14:00:20.336028] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.134 [2024-11-20 14:00:20.336031] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.134 [2024-11-20 14:00:20.336034] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcb8750) 00:39:23.134 [2024-11-20 14:00:20.336038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.134 [2024-11-20 14:00:20.336048] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd1cbc0, cid 3, qid 0 00:39:23.134 [2024-11-20 14:00:20.336094] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.134 [2024-11-20 14:00:20.336099] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.134 [2024-11-20 14:00:20.336101] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.134 [2024-11-20 14:00:20.336104] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd1cbc0) on tqpair=0xcb8750 00:39:23.134 [2024-11-20 14:00:20.336110] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.134 [2024-11-20 14:00:20.336113] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.134 [2024-11-20 14:00:20.336115] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcb8750) 00:39:23.134 [2024-11-20 14:00:20.336119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.134 [2024-11-20 14:00:20.336131] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd1cbc0, cid 3, qid 0 00:39:23.135 [2024-11-20 14:00:20.336180] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.135 [2024-11-20 14:00:20.336184] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.135 [2024-11-20 14:00:20.336187] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.135 [2024-11-20 14:00:20.336189] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd1cbc0) on tqpair=0xcb8750 00:39:23.135 [2024-11-20 14:00:20.336196] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.135 [2024-11-20 14:00:20.336199] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.135 [2024-11-20 14:00:20.336201] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcb8750) 00:39:23.135 [2024-11-20 14:00:20.336206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.135 [2024-11-20 14:00:20.336216] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd1cbc0, cid 3, qid 0 00:39:23.135 [2024-11-20 14:00:20.336268] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.135 [2024-11-20 14:00:20.336272] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.135 [2024-11-20 14:00:20.336277] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.135 [2024-11-20 14:00:20.336279] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd1cbc0) on tqpair=0xcb8750 00:39:23.135 [2024-11-20 14:00:20.336286] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.135 [2024-11-20 14:00:20.336289] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.135 [2024-11-20 14:00:20.336291] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcb8750) 00:39:23.135 [2024-11-20 14:00:20.336296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.135 [2024-11-20 14:00:20.336306] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd1cbc0, cid 3, qid 0 00:39:23.135 [2024-11-20 14:00:20.336380] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.135 [2024-11-20 14:00:20.336386] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.135 [2024-11-20 14:00:20.336388] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.135 [2024-11-20 14:00:20.336391] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd1cbc0) on tqpair=0xcb8750 00:39:23.135 [2024-11-20 14:00:20.336399] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.135 [2024-11-20 14:00:20.336402] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.135 [2024-11-20 14:00:20.336405] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcb8750) 00:39:23.135 [2024-11-20 14:00:20.336410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.135 [2024-11-20 14:00:20.336421] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd1cbc0, cid 3, qid 0 00:39:23.135 [2024-11-20 14:00:20.336477] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.135 [2024-11-20 14:00:20.336482] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.135 [2024-11-20 14:00:20.336485] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.135 [2024-11-20 14:00:20.336487] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd1cbc0) on tqpair=0xcb8750 00:39:23.135 [2024-11-20 14:00:20.336495] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.135 [2024-11-20 14:00:20.336498] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.135 [2024-11-20 14:00:20.336501] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcb8750) 00:39:23.135 [2024-11-20 14:00:20.336506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.135 [2024-11-20 14:00:20.336517] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd1cbc0, cid 3, qid 0 00:39:23.135 [2024-11-20 14:00:20.336579] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.135 [2024-11-20 14:00:20.336584] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.135 [2024-11-20 14:00:20.336586] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.135 [2024-11-20 14:00:20.336589] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd1cbc0) on tqpair=0xcb8750 00:39:23.135 [2024-11-20 14:00:20.336596] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.135 [2024-11-20 14:00:20.336600] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.135 [2024-11-20 14:00:20.336603] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcb8750) 00:39:23.135 [2024-11-20 14:00:20.336608] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.135 [2024-11-20 14:00:20.336619] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd1cbc0, cid 3, qid 0 00:39:23.135 [2024-11-20 14:00:20.339749] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.135 [2024-11-20 14:00:20.339755] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.135 [2024-11-20 14:00:20.339758] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.135 [2024-11-20 14:00:20.339761] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd1cbc0) on tqpair=0xcb8750 00:39:23.135 [2024-11-20 14:00:20.339771] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.135 [2024-11-20 14:00:20.339775] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.135 [2024-11-20 14:00:20.339778] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcb8750) 00:39:23.135 [2024-11-20 14:00:20.339784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.135 [2024-11-20 14:00:20.339803] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd1cbc0, cid 3, qid 0 00:39:23.135 [2024-11-20 14:00:20.339856] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.135 [2024-11-20 14:00:20.339862] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.135 [2024-11-20 14:00:20.339864] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.135 [2024-11-20 14:00:20.339867] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd1cbc0) on tqpair=0xcb8750 00:39:23.135 [2024-11-20 14:00:20.339873] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 3 milliseconds 00:39:23.135 00:39:23.135 14:00:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:39:23.135 [2024-11-20 14:00:20.383924] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:39:23.135 [2024-11-20 14:00:20.383976] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74178 ] 00:39:23.452 [2024-11-20 14:00:20.526067] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:39:23.452 [2024-11-20 14:00:20.526120] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:39:23.452 [2024-11-20 14:00:20.526124] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:39:23.452 [2024-11-20 14:00:20.526140] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:39:23.452 [2024-11-20 14:00:20.526149] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:39:23.452 [2024-11-20 14:00:20.526476] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:39:23.452 [2024-11-20 14:00:20.526534] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x12db750 0 00:39:23.452 [2024-11-20 14:00:20.533766] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:39:23.452 [2024-11-20 14:00:20.533784] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:39:23.452 [2024-11-20 14:00:20.533788] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:39:23.452 [2024-11-20 14:00:20.533790] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:39:23.452 [2024-11-20 14:00:20.533815] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.452 [2024-11-20 14:00:20.533819] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.452 [2024-11-20 14:00:20.533822] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12db750) 00:39:23.452 [2024-11-20 14:00:20.533834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:39:23.452 [2024-11-20 14:00:20.533857] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133f740, cid 0, qid 0 00:39:23.452 [2024-11-20 14:00:20.541732] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.452 [2024-11-20 14:00:20.541745] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:39:23.452 [2024-11-20 14:00:20.541747] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.452 [2024-11-20 14:00:20.541750] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133f740) on tqpair=0x12db750 00:39:23.452 [2024-11-20 14:00:20.541760] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:39:23.452 [2024-11-20 14:00:20.541765] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:39:23.452 [2024-11-20 14:00:20.541769] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:39:23.452 [2024-11-20 14:00:20.541781] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.452 [2024-11-20 14:00:20.541784] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.452 [2024-11-20 14:00:20.541786] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12db750) 00:39:23.452 [2024-11-20 14:00:20.541793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.452 [2024-11-20 14:00:20.541812] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133f740, cid 0, qid 0 00:39:23.452 [2024-11-20 14:00:20.541893] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.452 [2024-11-20 14:00:20.541899] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.452 [2024-11-20 14:00:20.541902] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.452 [2024-11-20 14:00:20.541904] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133f740) on tqpair=0x12db750 00:39:23.452 [2024-11-20 14:00:20.541908] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:39:23.452 [2024-11-20 14:00:20.541913] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:39:23.452 [2024-11-20 14:00:20.541919] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.452 [2024-11-20 14:00:20.541922] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.452 [2024-11-20 14:00:20.541924] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12db750) 00:39:23.452 [2024-11-20 14:00:20.541929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.452 [2024-11-20 14:00:20.541941] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133f740, cid 0, qid 0 00:39:23.452 [2024-11-20 14:00:20.542008] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.452 [2024-11-20 14:00:20.542014] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.452 [2024-11-20 14:00:20.542016] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.452 [2024-11-20 14:00:20.542018] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133f740) on tqpair=0x12db750 00:39:23.452 [2024-11-20 14:00:20.542022] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:39:23.452 [2024-11-20 14:00:20.542027] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:39:23.452 [2024-11-20 14:00:20.542032] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.452 [2024-11-20 14:00:20.542035] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.452 [2024-11-20 14:00:20.542037] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12db750) 00:39:23.452 [2024-11-20 14:00:20.542042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.452 [2024-11-20 14:00:20.542054] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133f740, cid 0, qid 0 00:39:23.452 [2024-11-20 14:00:20.542100] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.452 [2024-11-20 14:00:20.542104] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.452 [2024-11-20 14:00:20.542106] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.452 [2024-11-20 14:00:20.542109] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133f740) on tqpair=0x12db750 00:39:23.452 [2024-11-20 14:00:20.542113] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:39:23.452 [2024-11-20 14:00:20.542119] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.452 [2024-11-20 14:00:20.542122] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.452 [2024-11-20 14:00:20.542124] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12db750) 00:39:23.452 [2024-11-20 14:00:20.542128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.452 [2024-11-20 14:00:20.542139] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133f740, cid 0, qid 0 00:39:23.452 [2024-11-20 14:00:20.542192] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.452 [2024-11-20 14:00:20.542197] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.452 [2024-11-20 14:00:20.542199] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.452 [2024-11-20 14:00:20.542201] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133f740) on tqpair=0x12db750 00:39:23.452 [2024-11-20 14:00:20.542204] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:39:23.452 [2024-11-20 14:00:20.542208] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:39:23.452 [2024-11-20 14:00:20.542213] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:39:23.452 [2024-11-20 14:00:20.542322] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:39:23.452 [2024-11-20 14:00:20.542326] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:39:23.452 [2024-11-20 14:00:20.542334] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.452 [2024-11-20 
14:00:20.542336] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.452 [2024-11-20 14:00:20.542339] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12db750) 00:39:23.452 [2024-11-20 14:00:20.542344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.452 [2024-11-20 14:00:20.542356] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133f740, cid 0, qid 0 00:39:23.452 [2024-11-20 14:00:20.542403] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.452 [2024-11-20 14:00:20.542407] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.452 [2024-11-20 14:00:20.542410] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.452 [2024-11-20 14:00:20.542412] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133f740) on tqpair=0x12db750 00:39:23.452 [2024-11-20 14:00:20.542416] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:39:23.452 [2024-11-20 14:00:20.542421] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.453 [2024-11-20 14:00:20.542424] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.453 [2024-11-20 14:00:20.542426] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12db750) 00:39:23.453 [2024-11-20 14:00:20.542431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.453 [2024-11-20 14:00:20.542442] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133f740, cid 0, qid 0 00:39:23.453 [2024-11-20 14:00:20.542523] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.453 [2024-11-20 14:00:20.542536] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.453 [2024-11-20 14:00:20.542539] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.453 [2024-11-20 14:00:20.542541] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133f740) on tqpair=0x12db750 00:39:23.453 [2024-11-20 14:00:20.542545] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:39:23.453 [2024-11-20 14:00:20.542548] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:39:23.453 [2024-11-20 14:00:20.542554] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:39:23.453 [2024-11-20 14:00:20.542565] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:39:23.453 [2024-11-20 14:00:20.542572] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.453 [2024-11-20 14:00:20.542575] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12db750) 00:39:23.453 [2024-11-20 14:00:20.542580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.453 [2024-11-20 14:00:20.542592] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133f740, cid 0, qid 0 00:39:23.453 [2024-11-20 14:00:20.542694] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:39:23.453 [2024-11-20 14:00:20.542703] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:39:23.453 [2024-11-20 14:00:20.542714] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:39:23.453 [2024-11-20 14:00:20.542718] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12db750): datao=0, datal=4096, cccid=0 00:39:23.453 [2024-11-20 14:00:20.542721] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x133f740) on tqpair(0x12db750): expected_datao=0, payload_size=4096 00:39:23.453 [2024-11-20 14:00:20.542724] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.453 [2024-11-20 14:00:20.542731] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:39:23.453 [2024-11-20 14:00:20.542734] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:39:23.453 [2024-11-20 14:00:20.542742] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.453 [2024-11-20 14:00:20.542746] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.453 [2024-11-20 14:00:20.542748] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.453 [2024-11-20 14:00:20.542751] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133f740) on tqpair=0x12db750 00:39:23.453 [2024-11-20 14:00:20.542757] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:39:23.453 [2024-11-20 14:00:20.542760] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:39:23.453 [2024-11-20 14:00:20.542763] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:39:23.453 [2024-11-20 14:00:20.542766] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:39:23.453 [2024-11-20 14:00:20.542769] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:39:23.453 [2024-11-20 14:00:20.542772] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:39:23.453 [2024-11-20 14:00:20.542782] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:39:23.453 [2024-11-20 14:00:20.542787] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.453 [2024-11-20 14:00:20.542790] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.453 [2024-11-20 14:00:20.542792] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12db750) 00:39:23.453 [2024-11-20 14:00:20.542797] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:23.453 [2024-11-20 14:00:20.542810] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133f740, cid 0, qid 0 00:39:23.453 [2024-11-20 14:00:20.542890] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.453 [2024-11-20 14:00:20.542895] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.453 [2024-11-20 
14:00:20.542897] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.453 [2024-11-20 14:00:20.542900] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133f740) on tqpair=0x12db750 00:39:23.453 [2024-11-20 14:00:20.542906] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.453 [2024-11-20 14:00:20.542909] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.453 [2024-11-20 14:00:20.542911] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12db750) 00:39:23.453 [2024-11-20 14:00:20.542915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:39:23.453 [2024-11-20 14:00:20.542919] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.453 [2024-11-20 14:00:20.542924] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.453 [2024-11-20 14:00:20.542926] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x12db750) 00:39:23.453 [2024-11-20 14:00:20.542931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:39:23.453 [2024-11-20 14:00:20.542935] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.453 [2024-11-20 14:00:20.542938] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.453 [2024-11-20 14:00:20.542940] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x12db750) 00:39:23.453 [2024-11-20 14:00:20.542944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:39:23.453 [2024-11-20 14:00:20.542948] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.453 [2024-11-20 14:00:20.542951] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.453 [2024-11-20 14:00:20.542953] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12db750) 00:39:23.453 [2024-11-20 14:00:20.542957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:39:23.453 [2024-11-20 14:00:20.542960] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:39:23.453 [2024-11-20 14:00:20.542969] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:39:23.453 [2024-11-20 14:00:20.542974] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.453 [2024-11-20 14:00:20.542976] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12db750) 00:39:23.453 [2024-11-20 14:00:20.542981] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.453 [2024-11-20 14:00:20.542995] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133f740, cid 0, qid 0 00:39:23.453 [2024-11-20 14:00:20.542999] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133f8c0, cid 1, qid 0 00:39:23.453 [2024-11-20 14:00:20.543003] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133fa40, cid 2, qid 0 00:39:23.453 
[2024-11-20 14:00:20.543006] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133fbc0, cid 3, qid 0 00:39:23.453 [2024-11-20 14:00:20.543009] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133fd40, cid 4, qid 0 00:39:23.453 [2024-11-20 14:00:20.543104] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.453 [2024-11-20 14:00:20.543109] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.453 [2024-11-20 14:00:20.543112] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.453 [2024-11-20 14:00:20.543116] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133fd40) on tqpair=0x12db750 00:39:23.453 [2024-11-20 14:00:20.543120] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:39:23.453 [2024-11-20 14:00:20.543124] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:39:23.453 [2024-11-20 14:00:20.543129] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:39:23.453 [2024-11-20 14:00:20.543137] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:39:23.453 [2024-11-20 14:00:20.543142] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.453 [2024-11-20 14:00:20.543145] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.453 [2024-11-20 14:00:20.543147] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12db750) 00:39:23.453 [2024-11-20 14:00:20.543152] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:23.453 [2024-11-20 14:00:20.543164] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133fd40, cid 4, qid 0 00:39:23.453 [2024-11-20 14:00:20.543219] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.453 [2024-11-20 14:00:20.543224] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.453 [2024-11-20 14:00:20.543226] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.453 [2024-11-20 14:00:20.543229] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133fd40) on tqpair=0x12db750 00:39:23.453 [2024-11-20 14:00:20.543282] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:39:23.453 [2024-11-20 14:00:20.543294] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:39:23.453 [2024-11-20 14:00:20.543301] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.453 [2024-11-20 14:00:20.543304] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12db750) 00:39:23.453 [2024-11-20 14:00:20.543309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.453 [2024-11-20 14:00:20.543321] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133fd40, cid 4, qid 0 00:39:23.453 
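The ASYNC EVENT REQUEST commands (cid 0-3), the keep-alive timer ("Sending keep alive every 5000000 us") and the SET FEATURES NUMBER OF QUEUES exchange recorded above are all issued internally by the SPDK NVMe host driver during controller initialization. A host application only observes them through the admin-completion path; the sketch below shows one typical way to watch those events (the function names aer_cb/watch_admin_events and the bounded poll count are illustrative assumptions, not part of this test run).

#include <stdio.h>
#include "spdk/nvme.h"

/* Illustrative callback: invoked by the driver whenever one of the
 * outstanding ASYNC EVENT REQUEST commands completes. */
static void
aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl)) {
		fprintf(stderr, "AER completed with error\n");
		return;
	}
	/* Event type / log page information is reported in CDW0 of the completion. */
	printf("async event received, cdw0=0x%08x\n", cpl->cdw0);
}

static void
watch_admin_events(struct spdk_nvme_ctrlr *ctrlr)
{
	spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);

	/* In a real application this call sits inside the existing poll loop;
	 * processing admin completions also services the keep-alive timer. */
	for (int i = 0; i < 1000; i++) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
}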
[2024-11-20 14:00:20.543391] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:39:23.453 [2024-11-20 14:00:20.543397] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:39:23.453 [2024-11-20 14:00:20.543399] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:39:23.454 [2024-11-20 14:00:20.543402] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12db750): datao=0, datal=4096, cccid=4 00:39:23.454 [2024-11-20 14:00:20.543405] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x133fd40) on tqpair(0x12db750): expected_datao=0, payload_size=4096 00:39:23.454 [2024-11-20 14:00:20.543408] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.454 [2024-11-20 14:00:20.543413] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:39:23.454 [2024-11-20 14:00:20.543415] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:39:23.454 [2024-11-20 14:00:20.543421] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.454 [2024-11-20 14:00:20.543426] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.454 [2024-11-20 14:00:20.543428] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.454 [2024-11-20 14:00:20.543431] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133fd40) on tqpair=0x12db750 00:39:23.454 [2024-11-20 14:00:20.543443] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:39:23.454 [2024-11-20 14:00:20.543451] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:39:23.454 [2024-11-20 14:00:20.543459] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:39:23.454 [2024-11-20 14:00:20.543463] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.454 [2024-11-20 14:00:20.543466] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12db750) 00:39:23.454 [2024-11-20 14:00:20.543471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.454 [2024-11-20 14:00:20.543483] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133fd40, cid 4, qid 0 00:39:23.454 [2024-11-20 14:00:20.543556] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:39:23.454 [2024-11-20 14:00:20.543562] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:39:23.454 [2024-11-20 14:00:20.543565] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:39:23.454 [2024-11-20 14:00:20.543567] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12db750): datao=0, datal=4096, cccid=4 00:39:23.454 [2024-11-20 14:00:20.543570] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x133fd40) on tqpair(0x12db750): expected_datao=0, payload_size=4096 00:39:23.454 [2024-11-20 14:00:20.543573] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.454 [2024-11-20 14:00:20.543578] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:39:23.454 [2024-11-20 14:00:20.543581] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:39:23.454 [2024-11-20 14:00:20.543587] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: 
pdu type = 5 00:39:23.454 [2024-11-20 14:00:20.543591] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.454 [2024-11-20 14:00:20.543594] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.454 [2024-11-20 14:00:20.543596] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133fd40) on tqpair=0x12db750 00:39:23.454 [2024-11-20 14:00:20.543611] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:39:23.454 [2024-11-20 14:00:20.543618] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:39:23.454 [2024-11-20 14:00:20.543623] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.454 [2024-11-20 14:00:20.543625] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12db750) 00:39:23.454 [2024-11-20 14:00:20.543630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.454 [2024-11-20 14:00:20.543642] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133fd40, cid 4, qid 0 00:39:23.454 [2024-11-20 14:00:20.543728] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:39:23.454 [2024-11-20 14:00:20.543733] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:39:23.454 [2024-11-20 14:00:20.543735] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:39:23.454 [2024-11-20 14:00:20.543738] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12db750): datao=0, datal=4096, cccid=4 00:39:23.454 [2024-11-20 14:00:20.543741] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x133fd40) on tqpair(0x12db750): expected_datao=0, payload_size=4096 00:39:23.454 [2024-11-20 14:00:20.543744] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.454 [2024-11-20 14:00:20.543748] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:39:23.454 [2024-11-20 14:00:20.543750] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:39:23.454 [2024-11-20 14:00:20.543760] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.454 [2024-11-20 14:00:20.543764] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.454 [2024-11-20 14:00:20.543767] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.454 [2024-11-20 14:00:20.543772] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133fd40) on tqpair=0x12db750 00:39:23.454 [2024-11-20 14:00:20.543777] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:39:23.454 [2024-11-20 14:00:20.543783] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:39:23.454 [2024-11-20 14:00:20.543791] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:39:23.454 [2024-11-20 14:00:20.543796] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:39:23.454 [2024-11-20 
14:00:20.543800] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:39:23.454 [2024-11-20 14:00:20.543804] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:39:23.454 [2024-11-20 14:00:20.543808] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:39:23.454 [2024-11-20 14:00:20.543811] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:39:23.454 [2024-11-20 14:00:20.543815] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:39:23.454 [2024-11-20 14:00:20.543830] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.454 [2024-11-20 14:00:20.543832] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12db750) 00:39:23.454 [2024-11-20 14:00:20.543839] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.454 [2024-11-20 14:00:20.543844] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.454 [2024-11-20 14:00:20.543847] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.454 [2024-11-20 14:00:20.543849] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12db750) 00:39:23.454 [2024-11-20 14:00:20.543854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:39:23.454 [2024-11-20 14:00:20.543871] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133fd40, cid 4, qid 0 00:39:23.454 [2024-11-20 14:00:20.543875] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133fec0, cid 5, qid 0 00:39:23.454 [2024-11-20 14:00:20.543945] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.454 [2024-11-20 14:00:20.543949] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.454 [2024-11-20 14:00:20.543951] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.454 [2024-11-20 14:00:20.543954] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133fd40) on tqpair=0x12db750 00:39:23.454 [2024-11-20 14:00:20.543959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.454 [2024-11-20 14:00:20.543963] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.454 [2024-11-20 14:00:20.543965] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.454 [2024-11-20 14:00:20.543967] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133fec0) on tqpair=0x12db750 00:39:23.454 [2024-11-20 14:00:20.543976] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.454 [2024-11-20 14:00:20.543978] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12db750) 00:39:23.454 [2024-11-20 14:00:20.543983] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.454 [2024-11-20 14:00:20.543995] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133fec0, cid 5, qid 0 
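By this point the controller has walked the full init state machine and reached "ready", and namespace 1 has been attached (the "Namespace 1 was added" record above). Once a connected ctrlr handle is available, the active namespaces can be enumerated with the public API; a minimal sketch under that assumption (not code from this test):

#include <inttypes.h>
#include <stdio.h>
#include "spdk/nvme.h"

/* Walk every active namespace on an already-connected controller. */
static void
list_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
	uint32_t nsid;

	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
	     nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

		if (ns == NULL || !spdk_nvme_ns_is_active(ns)) {
			continue;
		}
		printf("nsid %u: %" PRIu64 " bytes, sector size %u\n",
		       nsid, spdk_nvme_ns_get_size(ns),
		       spdk_nvme_ns_get_sector_size(ns));
	}
}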
00:39:23.454 [2024-11-20 14:00:20.544062] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.454 [2024-11-20 14:00:20.544066] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.454 [2024-11-20 14:00:20.544070] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.454 [2024-11-20 14:00:20.544072] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133fec0) on tqpair=0x12db750 00:39:23.454 [2024-11-20 14:00:20.544078] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.454 [2024-11-20 14:00:20.544081] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12db750) 00:39:23.454 [2024-11-20 14:00:20.544085] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.454 [2024-11-20 14:00:20.544097] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133fec0, cid 5, qid 0 00:39:23.454 [2024-11-20 14:00:20.544160] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.454 [2024-11-20 14:00:20.544165] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.454 [2024-11-20 14:00:20.544167] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.454 [2024-11-20 14:00:20.544169] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133fec0) on tqpair=0x12db750 00:39:23.454 [2024-11-20 14:00:20.544175] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.454 [2024-11-20 14:00:20.544180] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12db750) 00:39:23.454 [2024-11-20 14:00:20.544184] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.454 [2024-11-20 14:00:20.544195] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133fec0, cid 5, qid 0 00:39:23.454 [2024-11-20 14:00:20.544248] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.454 [2024-11-20 14:00:20.544252] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.455 [2024-11-20 14:00:20.544255] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.455 [2024-11-20 14:00:20.544257] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133fec0) on tqpair=0x12db750 00:39:23.455 [2024-11-20 14:00:20.544269] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.455 [2024-11-20 14:00:20.544272] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12db750) 00:39:23.455 [2024-11-20 14:00:20.544276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.455 [2024-11-20 14:00:20.544281] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.455 [2024-11-20 14:00:20.544283] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12db750) 00:39:23.455 [2024-11-20 14:00:20.544288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.455 [2024-11-20 14:00:20.544293] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:39:23.455 [2024-11-20 14:00:20.544295] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x12db750) 00:39:23.455 [2024-11-20 14:00:20.544299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.455 [2024-11-20 14:00:20.544304] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.455 [2024-11-20 14:00:20.544307] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x12db750) 00:39:23.455 [2024-11-20 14:00:20.544311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.455 [2024-11-20 14:00:20.544324] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133fec0, cid 5, qid 0 00:39:23.455 [2024-11-20 14:00:20.544328] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133fd40, cid 4, qid 0 00:39:23.455 [2024-11-20 14:00:20.544331] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1340040, cid 6, qid 0 00:39:23.455 [2024-11-20 14:00:20.544334] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13401c0, cid 7, qid 0 00:39:23.455 [2024-11-20 14:00:20.544475] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:39:23.455 [2024-11-20 14:00:20.544484] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:39:23.455 [2024-11-20 14:00:20.544486] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:39:23.455 [2024-11-20 14:00:20.544488] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12db750): datao=0, datal=8192, cccid=5 00:39:23.455 [2024-11-20 14:00:20.544491] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x133fec0) on tqpair(0x12db750): expected_datao=0, payload_size=8192 00:39:23.455 [2024-11-20 14:00:20.544494] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.455 [2024-11-20 14:00:20.544506] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:39:23.455 [2024-11-20 14:00:20.544509] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:39:23.455 [2024-11-20 14:00:20.544514] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:39:23.455 [2024-11-20 14:00:20.544518] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:39:23.455 [2024-11-20 14:00:20.544520] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:39:23.455 [2024-11-20 14:00:20.544523] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12db750): datao=0, datal=512, cccid=4 00:39:23.455 [2024-11-20 14:00:20.544526] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x133fd40) on tqpair(0x12db750): expected_datao=0, payload_size=512 00:39:23.455 [2024-11-20 14:00:20.544528] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.455 [2024-11-20 14:00:20.544532] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:39:23.455 [2024-11-20 14:00:20.544534] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:39:23.455 [2024-11-20 14:00:20.544540] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:39:23.455 [2024-11-20 14:00:20.544544] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:39:23.455 [2024-11-20 14:00:20.544546] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:39:23.455 [2024-11-20 14:00:20.544548] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12db750): datao=0, datal=512, cccid=6 00:39:23.455 [2024-11-20 14:00:20.544551] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1340040) on tqpair(0x12db750): expected_datao=0, payload_size=512 00:39:23.455 [2024-11-20 14:00:20.544554] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.455 [2024-11-20 14:00:20.544558] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:39:23.455 [2024-11-20 14:00:20.544560] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:39:23.455 [2024-11-20 14:00:20.544564] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:39:23.455 [2024-11-20 14:00:20.544567] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:39:23.455 [2024-11-20 14:00:20.544569] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:39:23.455 [2024-11-20 14:00:20.544571] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12db750): datao=0, datal=4096, cccid=7 00:39:23.455 [2024-11-20 14:00:20.544574] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13401c0) on tqpair(0x12db750): expected_datao=0, payload_size=4096 00:39:23.455 [2024-11-20 14:00:20.544577] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.455 [2024-11-20 14:00:20.544581] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:39:23.455 [2024-11-20 14:00:20.544584] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:39:23.455 [2024-11-20 14:00:20.544589] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.455 [2024-11-20 14:00:20.544593] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.455 [2024-11-20 14:00:20.544595] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.455 [2024-11-20 14:00:20.544597] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133fec0) on tqpair=0x12db750 00:39:23.455 [2024-11-20 14:00:20.544608] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.455 [2024-11-20 14:00:20.544612] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.455 [2024-11-20 14:00:20.544614] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.455 [2024-11-20 14:00:20.544616] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133fd40) on tqpair=0x12db750 00:39:23.455 [2024-11-20 14:00:20.544626] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.455 [2024-11-20 14:00:20.544630] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.455 [2024-11-20 14:00:20.544632] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.455 [2024-11-20 14:00:20.544634] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1340040) on tqpair=0x12db750 00:39:23.455 [2024-11-20 14:00:20.544639] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.455 [2024-11-20 14:00:20.544643] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.455 [2024-11-20 14:00:20.544645] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.455 [2024-11-20 14:00:20.544647] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13401c0) on tqpair=0x12db750 00:39:23.455 
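The controller report that follows is the stdout of the spdk_nvme_identify invocation shown earlier in this log, produced once the admin-queue exchanges above (CONNECT, PROPERTY GET/SET, IDENTIFY) have completed. For orientation, a minimal host program that connects to the same TCP target and reads the identify data behind that report could look roughly like the sketch below; it is illustrative only, with error handling trimmed, and the program name is an assumption, not the code the test runs.

#include <stdio.h>
#include <string.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	env_opts.opts_size = sizeof(env_opts);
	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";   /* illustrative name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same transport ID string the test passes to spdk_nvme_identify -r. */
	memset(&trid, 0, sizeof(trid));
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Drives the CONNECT / PROPERTY GET/SET / IDENTIFY sequence seen in the log. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Serial Number: %.20s\n", cdata->sn);
	printf("Model Number:  %.40s\n", cdata->mn);

	spdk_nvme_detach(ctrlr);
	return 0;
}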
===================================================== 00:39:23.455 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:39:23.455 ===================================================== 00:39:23.455 Controller Capabilities/Features 00:39:23.455 ================================ 00:39:23.455 Vendor ID: 8086 00:39:23.455 Subsystem Vendor ID: 8086 00:39:23.455 Serial Number: SPDK00000000000001 00:39:23.455 Model Number: SPDK bdev Controller 00:39:23.455 Firmware Version: 25.01 00:39:23.455 Recommended Arb Burst: 6 00:39:23.455 IEEE OUI Identifier: e4 d2 5c 00:39:23.455 Multi-path I/O 00:39:23.455 May have multiple subsystem ports: Yes 00:39:23.455 May have multiple controllers: Yes 00:39:23.455 Associated with SR-IOV VF: No 00:39:23.455 Max Data Transfer Size: 131072 00:39:23.455 Max Number of Namespaces: 32 00:39:23.455 Max Number of I/O Queues: 127 00:39:23.455 NVMe Specification Version (VS): 1.3 00:39:23.455 NVMe Specification Version (Identify): 1.3 00:39:23.455 Maximum Queue Entries: 128 00:39:23.455 Contiguous Queues Required: Yes 00:39:23.455 Arbitration Mechanisms Supported 00:39:23.455 Weighted Round Robin: Not Supported 00:39:23.455 Vendor Specific: Not Supported 00:39:23.455 Reset Timeout: 15000 ms 00:39:23.455 Doorbell Stride: 4 bytes 00:39:23.455 NVM Subsystem Reset: Not Supported 00:39:23.455 Command Sets Supported 00:39:23.455 NVM Command Set: Supported 00:39:23.455 Boot Partition: Not Supported 00:39:23.455 Memory Page Size Minimum: 4096 bytes 00:39:23.455 Memory Page Size Maximum: 4096 bytes 00:39:23.455 Persistent Memory Region: Not Supported 00:39:23.455 Optional Asynchronous Events Supported 00:39:23.455 Namespace Attribute Notices: Supported 00:39:23.455 Firmware Activation Notices: Not Supported 00:39:23.455 ANA Change Notices: Not Supported 00:39:23.455 PLE Aggregate Log Change Notices: Not Supported 00:39:23.455 LBA Status Info Alert Notices: Not Supported 00:39:23.455 EGE Aggregate Log Change Notices: Not Supported 00:39:23.455 Normal NVM Subsystem Shutdown event: Not Supported 00:39:23.456 Zone Descriptor Change Notices: Not Supported 00:39:23.456 Discovery Log Change Notices: Not Supported 00:39:23.456 Controller Attributes 00:39:23.456 128-bit Host Identifier: Supported 00:39:23.456 Non-Operational Permissive Mode: Not Supported 00:39:23.456 NVM Sets: Not Supported 00:39:23.456 Read Recovery Levels: Not Supported 00:39:23.456 Endurance Groups: Not Supported 00:39:23.456 Predictable Latency Mode: Not Supported 00:39:23.456 Traffic Based Keep ALive: Not Supported 00:39:23.456 Namespace Granularity: Not Supported 00:39:23.456 SQ Associations: Not Supported 00:39:23.456 UUID List: Not Supported 00:39:23.456 Multi-Domain Subsystem: Not Supported 00:39:23.456 Fixed Capacity Management: Not Supported 00:39:23.456 Variable Capacity Management: Not Supported 00:39:23.456 Delete Endurance Group: Not Supported 00:39:23.456 Delete NVM Set: Not Supported 00:39:23.456 Extended LBA Formats Supported: Not Supported 00:39:23.456 Flexible Data Placement Supported: Not Supported 00:39:23.456 00:39:23.456 Controller Memory Buffer Support 00:39:23.456 ================================ 00:39:23.456 Supported: No 00:39:23.456 00:39:23.456 Persistent Memory Region Support 00:39:23.456 ================================ 00:39:23.456 Supported: No 00:39:23.456 00:39:23.456 Admin Command Set Attributes 00:39:23.456 ============================ 00:39:23.456 Security Send/Receive: Not Supported 00:39:23.456 Format NVM: Not Supported 00:39:23.456 Firmware Activate/Download: 
Not Supported 00:39:23.456 Namespace Management: Not Supported 00:39:23.456 Device Self-Test: Not Supported 00:39:23.456 Directives: Not Supported 00:39:23.456 NVMe-MI: Not Supported 00:39:23.456 Virtualization Management: Not Supported 00:39:23.456 Doorbell Buffer Config: Not Supported 00:39:23.456 Get LBA Status Capability: Not Supported 00:39:23.456 Command & Feature Lockdown Capability: Not Supported 00:39:23.456 Abort Command Limit: 4 00:39:23.456 Async Event Request Limit: 4 00:39:23.456 Number of Firmware Slots: N/A 00:39:23.456 Firmware Slot 1 Read-Only: N/A 00:39:23.456 Firmware Activation Without Reset: N/A 00:39:23.456 Multiple Update Detection Support: N/A 00:39:23.456 Firmware Update Granularity: No Information Provided 00:39:23.456 Per-Namespace SMART Log: No 00:39:23.456 Asymmetric Namespace Access Log Page: Not Supported 00:39:23.456 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:39:23.456 Command Effects Log Page: Supported 00:39:23.456 Get Log Page Extended Data: Supported 00:39:23.456 Telemetry Log Pages: Not Supported 00:39:23.456 Persistent Event Log Pages: Not Supported 00:39:23.456 Supported Log Pages Log Page: May Support 00:39:23.456 Commands Supported & Effects Log Page: Not Supported 00:39:23.456 Feature Identifiers & Effects Log Page:May Support 00:39:23.456 NVMe-MI Commands & Effects Log Page: May Support 00:39:23.456 Data Area 4 for Telemetry Log: Not Supported 00:39:23.456 Error Log Page Entries Supported: 128 00:39:23.456 Keep Alive: Supported 00:39:23.456 Keep Alive Granularity: 10000 ms 00:39:23.456 00:39:23.456 NVM Command Set Attributes 00:39:23.456 ========================== 00:39:23.456 Submission Queue Entry Size 00:39:23.456 Max: 64 00:39:23.456 Min: 64 00:39:23.456 Completion Queue Entry Size 00:39:23.456 Max: 16 00:39:23.456 Min: 16 00:39:23.456 Number of Namespaces: 32 00:39:23.456 Compare Command: Supported 00:39:23.456 Write Uncorrectable Command: Not Supported 00:39:23.456 Dataset Management Command: Supported 00:39:23.456 Write Zeroes Command: Supported 00:39:23.456 Set Features Save Field: Not Supported 00:39:23.456 Reservations: Supported 00:39:23.456 Timestamp: Not Supported 00:39:23.456 Copy: Supported 00:39:23.456 Volatile Write Cache: Present 00:39:23.456 Atomic Write Unit (Normal): 1 00:39:23.456 Atomic Write Unit (PFail): 1 00:39:23.456 Atomic Compare & Write Unit: 1 00:39:23.456 Fused Compare & Write: Supported 00:39:23.456 Scatter-Gather List 00:39:23.456 SGL Command Set: Supported 00:39:23.456 SGL Keyed: Supported 00:39:23.456 SGL Bit Bucket Descriptor: Not Supported 00:39:23.456 SGL Metadata Pointer: Not Supported 00:39:23.456 Oversized SGL: Not Supported 00:39:23.456 SGL Metadata Address: Not Supported 00:39:23.456 SGL Offset: Supported 00:39:23.456 Transport SGL Data Block: Not Supported 00:39:23.456 Replay Protected Memory Block: Not Supported 00:39:23.456 00:39:23.456 Firmware Slot Information 00:39:23.456 ========================= 00:39:23.456 Active slot: 1 00:39:23.456 Slot 1 Firmware Revision: 25.01 00:39:23.456 00:39:23.456 00:39:23.456 Commands Supported and Effects 00:39:23.456 ============================== 00:39:23.456 Admin Commands 00:39:23.456 -------------- 00:39:23.456 Get Log Page (02h): Supported 00:39:23.456 Identify (06h): Supported 00:39:23.456 Abort (08h): Supported 00:39:23.456 Set Features (09h): Supported 00:39:23.456 Get Features (0Ah): Supported 00:39:23.456 Asynchronous Event Request (0Ch): Supported 00:39:23.456 Keep Alive (18h): Supported 00:39:23.456 I/O Commands 00:39:23.456 ------------ 00:39:23.456 
Flush (00h): Supported LBA-Change 00:39:23.456 Write (01h): Supported LBA-Change 00:39:23.456 Read (02h): Supported 00:39:23.456 Compare (05h): Supported 00:39:23.456 Write Zeroes (08h): Supported LBA-Change 00:39:23.456 Dataset Management (09h): Supported LBA-Change 00:39:23.456 Copy (19h): Supported LBA-Change 00:39:23.456 00:39:23.456 Error Log 00:39:23.456 ========= 00:39:23.456 00:39:23.456 Arbitration 00:39:23.456 =========== 00:39:23.456 Arbitration Burst: 1 00:39:23.456 00:39:23.456 Power Management 00:39:23.456 ================ 00:39:23.456 Number of Power States: 1 00:39:23.456 Current Power State: Power State #0 00:39:23.456 Power State #0: 00:39:23.456 Max Power: 0.00 W 00:39:23.456 Non-Operational State: Operational 00:39:23.456 Entry Latency: Not Reported 00:39:23.456 Exit Latency: Not Reported 00:39:23.456 Relative Read Throughput: 0 00:39:23.456 Relative Read Latency: 0 00:39:23.456 Relative Write Throughput: 0 00:39:23.456 Relative Write Latency: 0 00:39:23.456 Idle Power: Not Reported 00:39:23.456 Active Power: Not Reported 00:39:23.456 Non-Operational Permissive Mode: Not Supported 00:39:23.456 00:39:23.456 Health Information 00:39:23.456 ================== 00:39:23.456 Critical Warnings: 00:39:23.456 Available Spare Space: OK 00:39:23.456 Temperature: OK 00:39:23.456 Device Reliability: OK 00:39:23.456 Read Only: No 00:39:23.456 Volatile Memory Backup: OK 00:39:23.456 Current Temperature: 0 Kelvin (-273 Celsius) 00:39:23.456 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:39:23.456 Available Spare: 0% 00:39:23.456 Available Spare Threshold: 0% 00:39:23.456 Life Percentage Used:[2024-11-20 14:00:20.544744] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.456 [2024-11-20 14:00:20.544748] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x12db750) 00:39:23.456 [2024-11-20 14:00:20.544753] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.456 [2024-11-20 14:00:20.544770] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13401c0, cid 7, qid 0 00:39:23.456 [2024-11-20 14:00:20.544827] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.456 [2024-11-20 14:00:20.544832] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.456 [2024-11-20 14:00:20.544834] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.456 [2024-11-20 14:00:20.544836] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13401c0) on tqpair=0x12db750 00:39:23.456 [2024-11-20 14:00:20.544863] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:39:23.456 [2024-11-20 14:00:20.544870] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133f740) on tqpair=0x12db750 00:39:23.456 [2024-11-20 14:00:20.544875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:23.456 [2024-11-20 14:00:20.544879] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133f8c0) on tqpair=0x12db750 00:39:23.456 [2024-11-20 14:00:20.544882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:23.456 [2024-11-20 14:00:20.544885] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133fa40) on tqpair=0x12db750 
00:39:23.456 [2024-11-20 14:00:20.544888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:23.456 [2024-11-20 14:00:20.544891] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133fbc0) on tqpair=0x12db750 00:39:23.457 [2024-11-20 14:00:20.544894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:23.457 [2024-11-20 14:00:20.544900] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.457 [2024-11-20 14:00:20.544903] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.457 [2024-11-20 14:00:20.544905] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12db750) 00:39:23.457 [2024-11-20 14:00:20.544910] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.457 [2024-11-20 14:00:20.544924] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133fbc0, cid 3, qid 0 00:39:23.457 [2024-11-20 14:00:20.544986] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.457 [2024-11-20 14:00:20.544991] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.457 [2024-11-20 14:00:20.544993] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.457 [2024-11-20 14:00:20.544995] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133fbc0) on tqpair=0x12db750 00:39:23.457 [2024-11-20 14:00:20.545001] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.457 [2024-11-20 14:00:20.545003] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.457 [2024-11-20 14:00:20.545005] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12db750) 00:39:23.457 [2024-11-20 14:00:20.545010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.457 [2024-11-20 14:00:20.545024] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133fbc0, cid 3, qid 0 00:39:23.457 [2024-11-20 14:00:20.545090] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.457 [2024-11-20 14:00:20.545094] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.457 [2024-11-20 14:00:20.545097] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.457 [2024-11-20 14:00:20.545100] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133fbc0) on tqpair=0x12db750 00:39:23.457 [2024-11-20 14:00:20.545103] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:39:23.457 [2024-11-20 14:00:20.545107] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:39:23.457 [2024-11-20 14:00:20.545113] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.457 [2024-11-20 14:00:20.545115] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.457 [2024-11-20 14:00:20.545118] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12db750) 00:39:23.457 [2024-11-20 14:00:20.545122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.457 
[2024-11-20 14:00:20.545134] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133fbc0, cid 3, qid 0 00:39:23.457 [2024-11-20 14:00:20.545187] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.457 [2024-11-20 14:00:20.545191] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.457 [2024-11-20 14:00:20.545193] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.457 [2024-11-20 14:00:20.545196] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133fbc0) on tqpair=0x12db750 00:39:23.457 [2024-11-20 14:00:20.545203] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.457 [2024-11-20 14:00:20.545205] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.457 [2024-11-20 14:00:20.545207] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12db750) 00:39:23.457 [2024-11-20 14:00:20.545212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.457 [2024-11-20 14:00:20.545223] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133fbc0, cid 3, qid 0 00:39:23.457 [2024-11-20 14:00:20.545285] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.457 [2024-11-20 14:00:20.545289] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.457 [2024-11-20 14:00:20.545291] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.457 [2024-11-20 14:00:20.545294] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133fbc0) on tqpair=0x12db750 00:39:23.457 [2024-11-20 14:00:20.545300] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.457 [2024-11-20 14:00:20.545303] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.457 [2024-11-20 14:00:20.545305] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12db750) 00:39:23.457 [2024-11-20 14:00:20.545309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.457 [2024-11-20 14:00:20.545320] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133fbc0, cid 3, qid 0 00:39:23.457 [2024-11-20 14:00:20.545365] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.457 [2024-11-20 14:00:20.545369] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.457 [2024-11-20 14:00:20.545372] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.457 [2024-11-20 14:00:20.545374] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133fbc0) on tqpair=0x12db750 00:39:23.457 [2024-11-20 14:00:20.545380] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.457 [2024-11-20 14:00:20.545382] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.457 [2024-11-20 14:00:20.545385] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12db750) 00:39:23.457 [2024-11-20 14:00:20.545389] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.457 [2024-11-20 14:00:20.545401] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133fbc0, cid 3, qid 0 00:39:23.457 [2024-11-20 14:00:20.545450] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:39:23.457 [2024-11-20 14:00:20.545454] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.457 [2024-11-20 14:00:20.545456] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.457 [2024-11-20 14:00:20.545458] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133fbc0) on tqpair=0x12db750 00:39:23.457 [2024-11-20 14:00:20.545465] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.457 [2024-11-20 14:00:20.545467] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.457 [2024-11-20 14:00:20.545470] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12db750) 00:39:23.457 [2024-11-20 14:00:20.545474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.457 [2024-11-20 14:00:20.545485] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133fbc0, cid 3, qid 0 00:39:23.457 [2024-11-20 14:00:20.545555] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.457 [2024-11-20 14:00:20.545559] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.457 [2024-11-20 14:00:20.545563] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.457 [2024-11-20 14:00:20.545565] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133fbc0) on tqpair=0x12db750 00:39:23.457 [2024-11-20 14:00:20.545572] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.457 [2024-11-20 14:00:20.545574] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.457 [2024-11-20 14:00:20.545577] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12db750) 00:39:23.457 [2024-11-20 14:00:20.545581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.457 [2024-11-20 14:00:20.545592] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133fbc0, cid 3, qid 0 00:39:23.457 [2024-11-20 14:00:20.545641] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.457 [2024-11-20 14:00:20.545645] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.457 [2024-11-20 14:00:20.545647] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.457 [2024-11-20 14:00:20.545649] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133fbc0) on tqpair=0x12db750 00:39:23.457 [2024-11-20 14:00:20.545656] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.457 [2024-11-20 14:00:20.545658] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.457 [2024-11-20 14:00:20.545661] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12db750) 00:39:23.457 [2024-11-20 14:00:20.545665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.457 [2024-11-20 14:00:20.545676] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133fbc0, cid 3, qid 0 00:39:23.457 [2024-11-20 14:00:20.549723] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.457 [2024-11-20 14:00:20.549735] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.457 [2024-11-20 14:00:20.549738] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.457 [2024-11-20 14:00:20.549740] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133fbc0) on tqpair=0x12db750 00:39:23.457 [2024-11-20 14:00:20.549748] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:39:23.458 [2024-11-20 14:00:20.549751] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:39:23.458 [2024-11-20 14:00:20.549753] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12db750) 00:39:23.458 [2024-11-20 14:00:20.549758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:23.458 [2024-11-20 14:00:20.549773] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x133fbc0, cid 3, qid 0 00:39:23.458 [2024-11-20 14:00:20.549820] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:39:23.458 [2024-11-20 14:00:20.549824] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:39:23.458 [2024-11-20 14:00:20.549827] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:39:23.458 [2024-11-20 14:00:20.549829] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x133fbc0) on tqpair=0x12db750 00:39:23.458 [2024-11-20 14:00:20.549834] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:39:23.458 0% 00:39:23.458 Data Units Read: 0 00:39:23.458 Data Units Written: 0 00:39:23.458 Host Read Commands: 0 00:39:23.458 Host Write Commands: 0 00:39:23.458 Controller Busy Time: 0 minutes 00:39:23.458 Power Cycles: 0 00:39:23.458 Power On Hours: 0 hours 00:39:23.458 Unsafe Shutdowns: 0 00:39:23.458 Unrecoverable Media Errors: 0 00:39:23.458 Lifetime Error Log Entries: 0 00:39:23.458 Warning Temperature Time: 0 minutes 00:39:23.458 Critical Temperature Time: 0 minutes 00:39:23.458 00:39:23.458 Number of Queues 00:39:23.458 ================ 00:39:23.458 Number of I/O Submission Queues: 127 00:39:23.458 Number of I/O Completion Queues: 127 00:39:23.458 00:39:23.458 Active Namespaces 00:39:23.458 ================= 00:39:23.458 Namespace ID:1 00:39:23.458 Error Recovery Timeout: Unlimited 00:39:23.458 Command Set Identifier: NVM (00h) 00:39:23.458 Deallocate: Supported 00:39:23.458 Deallocated/Unwritten Error: Not Supported 00:39:23.458 Deallocated Read Value: Unknown 00:39:23.458 Deallocate in Write Zeroes: Not Supported 00:39:23.458 Deallocated Guard Field: 0xFFFF 00:39:23.458 Flush: Supported 00:39:23.458 Reservation: Supported 00:39:23.458 Namespace Sharing Capabilities: Multiple Controllers 00:39:23.458 Size (in LBAs): 131072 (0GiB) 00:39:23.458 Capacity (in LBAs): 131072 (0GiB) 00:39:23.458 Utilization (in LBAs): 131072 (0GiB) 00:39:23.458 NGUID: ABCDEF0123456789ABCDEF0123456789 00:39:23.458 EUI64: ABCDEF0123456789 00:39:23.458 UUID: e11a4505-f571-4dfc-9882-b16d55ce3637 00:39:23.458 Thin Provisioning: Not Supported 00:39:23.458 Per-NS Atomic Units: Yes 00:39:23.458 Atomic Boundary Size (Normal): 0 00:39:23.458 Atomic Boundary Size (PFail): 0 00:39:23.458 Atomic Boundary Offset: 0 00:39:23.458 Maximum Single Source Range Length: 65535 00:39:23.458 Maximum Copy Length: 65535 00:39:23.458 Maximum Source Range Count: 1 00:39:23.458 NGUID/EUI64 Never Reused: No 00:39:23.458 Namespace Write Protected: No 00:39:23.458 Number of LBA Formats: 1 00:39:23.458 Current LBA Format: LBA Format #00 00:39:23.458 LBA Format #00: Data 
Size: 512 Metadata Size: 0 00:39:23.458 00:39:23.458 14:00:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:39:24.028 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:24.028 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:24.028 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:39:24.028 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:24.028 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:39:24.028 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:39:24.028 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:24.028 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:39:24.028 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:24.028 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:39:24.028 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:24.028 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:24.028 rmmod nvme_tcp 00:39:24.028 rmmod nvme_fabrics 00:39:24.028 rmmod nvme_keyring 00:39:24.028 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:24.028 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:39:24.028 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:39:24.028 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 74136 ']' 00:39:24.028 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 74136 00:39:24.028 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 74136 ']' 00:39:24.028 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 74136 00:39:24.028 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:39:24.028 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:24.028 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74136 00:39:24.028 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:24.028 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:24.028 killing process with pid 74136 00:39:24.028 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74136' 00:39:24.028 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 74136 00:39:24.028 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 74136 00:39:24.289 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:24.289 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:24.289 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:24.289 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:39:24.289 14:00:21 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:24.289 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:39:24.289 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:39:24.289 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:24.289 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:39:24.289 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:39:24.289 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:39:24.289 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:39:24.289 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:39:24.289 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:39:24.289 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:39:24.289 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:39:24.289 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:39:24.289 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:39:24.549 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:39:24.550 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:39:24.550 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:39:24.550 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:39:24.550 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:39:24.550 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:24.550 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:24.550 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:24.550 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:39:24.550 00:39:24.550 real 0m3.541s 00:39:24.550 user 0m9.015s 00:39:24.550 sys 0m0.953s 00:39:24.550 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:24.550 14:00:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:39:24.550 ************************************ 00:39:24.550 END TEST nvmf_identify 00:39:24.550 ************************************ 00:39:24.550 14:00:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:39:24.550 14:00:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:24.550 14:00:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:24.550 14:00:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:39:24.550 ************************************ 00:39:24.550 START TEST nvmf_perf 00:39:24.550 
************************************ 00:39:24.550 14:00:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:39:24.811 * Looking for test storage... 00:39:24.811 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:39:24.811 14:00:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:24.811 14:00:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:39:24.811 14:00:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:24.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:24.811 --rc genhtml_branch_coverage=1 00:39:24.811 --rc genhtml_function_coverage=1 00:39:24.811 --rc genhtml_legend=1 00:39:24.811 --rc geninfo_all_blocks=1 00:39:24.811 --rc geninfo_unexecuted_blocks=1 00:39:24.811 00:39:24.811 ' 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:24.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:24.811 --rc genhtml_branch_coverage=1 00:39:24.811 --rc genhtml_function_coverage=1 00:39:24.811 --rc genhtml_legend=1 00:39:24.811 --rc geninfo_all_blocks=1 00:39:24.811 --rc geninfo_unexecuted_blocks=1 00:39:24.811 00:39:24.811 ' 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:24.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:24.811 --rc genhtml_branch_coverage=1 00:39:24.811 --rc genhtml_function_coverage=1 00:39:24.811 --rc genhtml_legend=1 00:39:24.811 --rc geninfo_all_blocks=1 00:39:24.811 --rc geninfo_unexecuted_blocks=1 00:39:24.811 00:39:24.811 ' 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:24.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:24.811 --rc genhtml_branch_coverage=1 00:39:24.811 --rc genhtml_function_coverage=1 00:39:24.811 --rc genhtml_legend=1 00:39:24.811 --rc geninfo_all_blocks=1 00:39:24.811 --rc geninfo_unexecuted_blocks=1 00:39:24.811 00:39:24.811 ' 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=105ec898-1662-46bd-85be-b241e399edb9 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:24.811 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:24.812 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:24.812 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:24.812 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:24.812 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:24.812 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:24.812 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:24.812 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:24.812 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:39:24.812 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:39:24.812 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:24.812 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:39:24.812 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:24.812 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:24.812 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:24.812 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:24.812 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:24.812 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:24.812 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:39:24.812 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:39:25.072 Cannot find device "nvmf_init_br" 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:39:25.072 Cannot find device "nvmf_init_br2" 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:39:25.072 Cannot find device "nvmf_tgt_br" 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:39:25.072 Cannot find device "nvmf_tgt_br2" 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:39:25.072 Cannot find device "nvmf_init_br" 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:39:25.072 Cannot find device "nvmf_init_br2" 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:39:25.072 Cannot find device "nvmf_tgt_br" 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:39:25.072 Cannot find device "nvmf_tgt_br2" 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:39:25.072 Cannot find device "nvmf_br" 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:39:25.072 Cannot find device "nvmf_init_if" 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:39:25.072 Cannot find device "nvmf_init_if2" 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:39:25.072 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:39:25.072 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:39:25.072 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:39:25.333 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:39:25.333 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:39:25.333 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:39:25.333 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:39:25.333 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:39:25.333 14:00:22 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:39:25.333 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:39:25.333 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:39:25.333 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:39:25.333 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:39:25.333 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:39:25.333 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:39:25.333 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:39:25.333 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:39:25.333 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:39:25.333 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:39:25.333 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:39:25.333 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:39:25.333 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:39:25.333 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:39:25.333 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:39:25.333 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:39:25.333 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:39:25.333 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:39:25.333 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:39:25.333 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:39:25.333 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:39:25.333 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:39:25.333 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:39:25.333 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:39:25.333 00:39:25.333 --- 10.0.0.3 ping statistics --- 00:39:25.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:25.333 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:39:25.333 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:39:25.333 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:39:25.333 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:39:25.333 00:39:25.333 --- 10.0.0.4 ping statistics --- 00:39:25.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:25.333 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:39:25.333 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:39:25.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:25.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:39:25.333 00:39:25.333 --- 10.0.0.1 ping statistics --- 00:39:25.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:25.333 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:39:25.333 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:39:25.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:25.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:39:25.333 00:39:25.333 --- 10.0.0.2 ping statistics --- 00:39:25.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:25.333 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:39:25.333 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:25.333 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:39:25.333 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:25.333 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:25.333 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:25.333 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:25.333 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:25.333 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:25.334 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:25.334 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:39:25.334 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:25.334 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:25.334 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:39:25.334 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=74402 00:39:25.334 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:39:25.334 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 74402 00:39:25.334 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 74402 ']' 00:39:25.334 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:25.334 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:25.334 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:25.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:25.334 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:25.334 14:00:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:39:25.334 [2024-11-20 14:00:22.647655] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:39:25.334 [2024-11-20 14:00:22.647731] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:25.594 [2024-11-20 14:00:22.801048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:25.594 [2024-11-20 14:00:22.860471] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:25.594 [2024-11-20 14:00:22.860524] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:25.594 [2024-11-20 14:00:22.860531] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:25.594 [2024-11-20 14:00:22.860536] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:25.594 [2024-11-20 14:00:22.860540] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:25.594 [2024-11-20 14:00:22.861505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:25.594 [2024-11-20 14:00:22.861623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:25.594 [2024-11-20 14:00:22.861792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:25.594 [2024-11-20 14:00:22.861797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:25.853 [2024-11-20 14:00:22.935709] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:26.422 14:00:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:26.422 14:00:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:39:26.422 14:00:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:26.422 14:00:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:26.422 14:00:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:39:26.422 14:00:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:26.422 14:00:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:39:26.422 14:00:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:39:26.681 14:00:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:39:26.681 14:00:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:39:26.941 14:00:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:39:26.941 14:00:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:27.200 14:00:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:39:27.200 14:00:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:39:27.200 14:00:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:39:27.200 14:00:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:39:27.200 14:00:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:39:27.200 [2024-11-20 14:00:24.513645] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:27.460 14:00:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:27.460 14:00:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:39:27.460 14:00:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:27.719 14:00:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:39:27.719 14:00:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:39:27.979 14:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:39:27.979 [2024-11-20 14:00:25.257607] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:39:27.979 14:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:39:28.238 14:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:39:28.238 14:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:39:28.238 14:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:39:28.238 14:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:39:29.617 Initializing NVMe Controllers 00:39:29.617 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:39:29.617 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:39:29.617 Initialization complete. Launching workers. 00:39:29.617 ======================================================== 00:39:29.617 Latency(us) 00:39:29.617 Device Information : IOPS MiB/s Average min max 00:39:29.617 PCIE (0000:00:10.0) NSID 1 from core 0: 19936.22 77.88 1604.77 325.37 7710.71 00:39:29.617 ======================================================== 00:39:29.617 Total : 19936.22 77.88 1604.77 325.37 7710.71 00:39:29.617 00:39:29.617 14:00:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:39:30.556 Initializing NVMe Controllers 00:39:30.556 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:39:30.556 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:39:30.556 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:39:30.556 Initialization complete. Launching workers. 
00:39:30.556 ======================================================== 00:39:30.556 Latency(us) 00:39:30.556 Device Information : IOPS MiB/s Average min max 00:39:30.556 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2521.00 9.85 396.47 108.86 7145.07 00:39:30.556 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 125.00 0.49 8046.97 5036.49 12029.62 00:39:30.556 ======================================================== 00:39:30.556 Total : 2646.00 10.34 757.89 108.86 12029.62 00:39:30.556 00:39:30.815 14:00:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:39:32.196 Initializing NVMe Controllers 00:39:32.196 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:39:32.196 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:39:32.196 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:39:32.196 Initialization complete. Launching workers. 00:39:32.196 ======================================================== 00:39:32.196 Latency(us) 00:39:32.196 Device Information : IOPS MiB/s Average min max 00:39:32.196 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10623.00 41.50 3014.06 499.52 7877.66 00:39:32.196 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3922.00 15.32 8214.55 6599.74 16693.89 00:39:32.196 ======================================================== 00:39:32.196 Total : 14545.00 56.82 4416.35 499.52 16693.89 00:39:32.196 00:39:32.196 14:00:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:39:32.196 14:00:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:39:34.762 Initializing NVMe Controllers 00:39:34.762 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:39:34.762 Controller IO queue size 128, less than required. 00:39:34.762 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:34.762 Controller IO queue size 128, less than required. 00:39:34.762 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:34.762 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:39:34.762 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:39:34.762 Initialization complete. Launching workers. 
00:39:34.762 ======================================================== 00:39:34.762 Latency(us) 00:39:34.762 Device Information : IOPS MiB/s Average min max 00:39:34.762 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1843.50 460.87 70390.35 37085.44 124517.17 00:39:34.762 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 671.50 167.87 196137.27 30261.28 326008.06 00:39:34.762 ======================================================== 00:39:34.762 Total : 2515.00 628.75 103964.53 30261.28 326008.06 00:39:34.762 00:39:34.762 14:00:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:39:34.762 Initializing NVMe Controllers 00:39:34.762 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:39:34.762 Controller IO queue size 128, less than required. 00:39:34.762 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:34.762 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:39:34.762 Controller IO queue size 128, less than required. 00:39:34.762 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:34.762 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:39:34.762 WARNING: Some requested NVMe devices were skipped 00:39:34.762 No valid NVMe controllers or AIO or URING devices found 00:39:34.762 14:00:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:39:37.300 Initializing NVMe Controllers 00:39:37.300 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:39:37.300 Controller IO queue size 128, less than required. 00:39:37.300 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:37.300 Controller IO queue size 128, less than required. 00:39:37.300 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:37.300 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:39:37.300 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:39:37.300 Initialization complete. Launching workers. 
00:39:37.300 00:39:37.300 ==================== 00:39:37.300 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:39:37.300 TCP transport: 00:39:37.300 polls: 24844 00:39:37.300 idle_polls: 19663 00:39:37.300 sock_completions: 5181 00:39:37.300 nvme_completions: 6833 00:39:37.300 submitted_requests: 10238 00:39:37.300 queued_requests: 1 00:39:37.300 00:39:37.300 ==================== 00:39:37.300 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:39:37.300 TCP transport: 00:39:37.300 polls: 24964 00:39:37.300 idle_polls: 18627 00:39:37.300 sock_completions: 6337 00:39:37.300 nvme_completions: 6177 00:39:37.300 submitted_requests: 9176 00:39:37.300 queued_requests: 1 00:39:37.300 ======================================================== 00:39:37.300 Latency(us) 00:39:37.300 Device Information : IOPS MiB/s Average min max 00:39:37.300 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1706.22 426.55 76727.01 42735.59 132427.59 00:39:37.300 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1542.39 385.60 83155.61 26657.34 131078.65 00:39:37.300 ======================================================== 00:39:37.300 Total : 3248.60 812.15 79779.21 26657.34 132427.59 00:39:37.300 00:39:37.300 14:00:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:39:37.300 14:00:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:37.560 14:00:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:39:37.560 14:00:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:39:37.560 14:00:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:39:37.560 14:00:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:37.560 14:00:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:39:37.560 14:00:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:37.560 14:00:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:39:37.560 14:00:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:37.560 14:00:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:37.560 rmmod nvme_tcp 00:39:37.560 rmmod nvme_fabrics 00:39:37.560 rmmod nvme_keyring 00:39:37.560 14:00:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:37.560 14:00:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:39:37.560 14:00:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:39:37.819 14:00:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 74402 ']' 00:39:37.819 14:00:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 74402 00:39:37.819 14:00:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 74402 ']' 00:39:37.819 14:00:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 74402 00:39:37.819 14:00:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:39:37.819 14:00:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:37.819 14:00:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74402 00:39:37.819 killing process with pid 74402 00:39:37.819 14:00:34 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:37.819 14:00:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:37.819 14:00:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74402' 00:39:37.819 14:00:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 74402 00:39:37.819 14:00:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 74402 00:39:39.728 14:00:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:39.728 14:00:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:39.728 14:00:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:39.728 14:00:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:39:39.728 14:00:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:39:39.728 14:00:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:39.728 14:00:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:39:39.728 14:00:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:39.728 14:00:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:39:39.728 14:00:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:39:39.728 14:00:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:39:39.728 14:00:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:39:39.728 14:00:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:39:39.728 14:00:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:39:39.728 14:00:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:39:39.728 14:00:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:39:39.728 14:00:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:39:39.728 14:00:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:39:39.728 14:00:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:39:39.728 14:00:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:39:39.728 14:00:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:39:39.728 14:00:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:39:39.728 14:00:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:39:39.728 14:00:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:39.728 14:00:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:39.728 14:00:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:39.728 14:00:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:39:39.728 00:39:39.728 real 0m15.049s 00:39:39.728 user 0m53.873s 00:39:39.728 sys 0m4.008s 00:39:39.728 14:00:36 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:39.728 14:00:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:39:39.728 ************************************ 00:39:39.728 END TEST nvmf_perf 00:39:39.728 ************************************ 00:39:39.728 14:00:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:39:39.728 14:00:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:39.728 14:00:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:39.728 14:00:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:39:39.728 ************************************ 00:39:39.728 START TEST nvmf_fio_host 00:39:39.728 ************************************ 00:39:39.728 14:00:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:39:39.989 * Looking for test storage... 00:39:39.989 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:39:39.989 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:39.989 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:39.989 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:39:39.989 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:39.989 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:39.989 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:39.989 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:39.989 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:39:39.989 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:39:39.989 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:39:39.989 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:39:39.989 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:39:39.989 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:39:39.989 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:39:39.989 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:39.989 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:39:39.989 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:39:39.989 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:39.989 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:39.989 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:39:39.989 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:39:39.989 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:39.989 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:39:39.989 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:39:39.989 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:39:39.989 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:39:39.989 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:39.989 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:39:39.989 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:39:39.989 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:39.989 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:39.989 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:39:39.989 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:39.989 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:39.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:39.989 --rc genhtml_branch_coverage=1 00:39:39.989 --rc genhtml_function_coverage=1 00:39:39.989 --rc genhtml_legend=1 00:39:39.989 --rc geninfo_all_blocks=1 00:39:39.989 --rc geninfo_unexecuted_blocks=1 00:39:39.989 00:39:39.989 ' 00:39:39.989 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:39.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:39.989 --rc genhtml_branch_coverage=1 00:39:39.989 --rc genhtml_function_coverage=1 00:39:39.989 --rc genhtml_legend=1 00:39:39.989 --rc geninfo_all_blocks=1 00:39:39.989 --rc geninfo_unexecuted_blocks=1 00:39:39.989 00:39:39.989 ' 00:39:39.989 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:39.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:39.989 --rc genhtml_branch_coverage=1 00:39:39.989 --rc genhtml_function_coverage=1 00:39:39.989 --rc genhtml_legend=1 00:39:39.989 --rc geninfo_all_blocks=1 00:39:39.989 --rc geninfo_unexecuted_blocks=1 00:39:39.989 00:39:39.989 ' 00:39:39.989 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:39.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:39.989 --rc genhtml_branch_coverage=1 00:39:39.989 --rc genhtml_function_coverage=1 00:39:39.989 --rc genhtml_legend=1 00:39:39.989 --rc geninfo_all_blocks=1 00:39:39.989 --rc geninfo_unexecuted_blocks=1 00:39:39.989 00:39:39.989 ' 00:39:39.989 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:39.989 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:39:39.989 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:39.989 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:39.989 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:39.990 14:00:37 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=105ec898-1662-46bd-85be-b241e399edb9 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:39.990 14:00:37 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:39.990 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
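[Editor's note] The nvmftestinit entries that follow build a veth/bridge test network inside a dedicated namespace before the fio host test starts. A condensed sketch of those steps for the first initiator/target pair, reusing the interface names and 10.0.0.x addresses that appear in this run (the second pair, the link-up commands for every interface, and the ping checks are omitted here and appear in full in the log below):

    # target interfaces live in their own namespace so host and target use separate stacks
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    # bridge the two veth peers so 10.0.0.1 can reach 10.0.0.3
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    # allow NVMe/TCP traffic to the default port used by this test
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT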
00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:39.990 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:39:39.991 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:39:39.991 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:39:39.991 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:39:39.991 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:39:39.991 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:39:39.991 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:39.991 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:39:39.991 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:39:39.991 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:39:39.991 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:39:39.991 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:39:39.991 Cannot find device "nvmf_init_br" 00:39:39.991 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:39:39.991 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:39:39.991 Cannot find device "nvmf_init_br2" 00:39:40.250 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:39:40.250 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:39:40.250 Cannot find device "nvmf_tgt_br" 00:39:40.250 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:39:40.250 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:39:40.250 Cannot find device "nvmf_tgt_br2" 00:39:40.250 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:39:40.250 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:39:40.250 Cannot find device "nvmf_init_br" 00:39:40.250 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:39:40.250 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:39:40.250 Cannot find device "nvmf_init_br2" 00:39:40.250 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:39:40.250 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:39:40.250 Cannot find device "nvmf_tgt_br" 00:39:40.250 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:39:40.250 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:39:40.250 Cannot find device "nvmf_tgt_br2" 00:39:40.250 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:39:40.250 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:39:40.250 Cannot find device "nvmf_br" 00:39:40.250 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:39:40.250 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:39:40.250 Cannot find device "nvmf_init_if" 00:39:40.250 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:39:40.250 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:39:40.250 Cannot find device "nvmf_init_if2" 00:39:40.250 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:39:40.250 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:39:40.250 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:40.250 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:39:40.250 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:39:40.250 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:40.250 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:39:40.250 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:39:40.250 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:39:40.250 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:39:40.250 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:39:40.250 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:39:40.250 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:39:40.250 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:39:40.510 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:39:40.510 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.116 ms 00:39:40.510 00:39:40.510 --- 10.0.0.3 ping statistics --- 00:39:40.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:40.510 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:39:40.510 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:39:40.510 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.125 ms 00:39:40.510 00:39:40.510 --- 10.0.0.4 ping statistics --- 00:39:40.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:40.510 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:39:40.510 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:40.510 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:39:40.510 00:39:40.510 --- 10.0.0.1 ping statistics --- 00:39:40.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:40.510 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:39:40.510 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:40.510 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:39:40.510 00:39:40.510 --- 10.0.0.2 ping statistics --- 00:39:40.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:40.510 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=74869 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 74869 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 74869 ']' 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:40.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:40.510 14:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:39:40.770 [2024-11-20 14:00:37.849287] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:39:40.770 [2024-11-20 14:00:37.849349] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:40.770 [2024-11-20 14:00:38.001616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:40.770 [2024-11-20 14:00:38.069075] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:40.770 [2024-11-20 14:00:38.069118] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:40.770 [2024-11-20 14:00:38.069125] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:40.770 [2024-11-20 14:00:38.069131] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:40.770 [2024-11-20 14:00:38.069135] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
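[Editor's note] Once nvmf_tgt is running in the namespace, the fio host test configures the target over the RPC socket and then drives I/O through the SPDK NVMe fio plugin, as the entries below show. A condensed sketch of that sequence, with repo-relative paths shortened from the absolute /home/vagrant/spdk_repo paths used in this job:

    # create the TCP transport and a 64 MiB / 512 B-block malloc bdev to export
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # fio loads the SPDK NVMe engine via LD_PRELOAD and addresses the TCP listener in --filename
    LD_PRELOAD=build/fio/spdk_nvme /usr/src/fio/fio app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096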
00:39:40.770 [2024-11-20 14:00:38.070106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:40.770 [2024-11-20 14:00:38.070205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:40.770 [2024-11-20 14:00:38.070250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:40.770 [2024-11-20 14:00:38.070255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:41.028 [2024-11-20 14:00:38.122584] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:41.599 14:00:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:41.599 14:00:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:39:41.599 14:00:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:41.859 [2024-11-20 14:00:38.984898] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:41.859 14:00:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:39:41.859 14:00:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:41.859 14:00:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:39:41.859 14:00:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:39:42.117 Malloc1 00:39:42.117 14:00:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:42.376 14:00:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:39:42.635 14:00:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:39:42.895 [2024-11-20 14:00:40.007285] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:39:42.895 14:00:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:39:43.154 14:00:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:39:43.154 14:00:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:39:43.155 14:00:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:39:43.155 14:00:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:39:43.155 14:00:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:43.155 14:00:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:39:43.155 14:00:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:39:43.155 14:00:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:39:43.155 14:00:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:39:43.155 14:00:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:43.155 14:00:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:39:43.155 14:00:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:39:43.155 14:00:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:43.155 14:00:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:43.155 14:00:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:43.155 14:00:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:43.155 14:00:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:39:43.155 14:00:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:43.155 14:00:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:39:43.155 14:00:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:43.155 14:00:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:43.155 14:00:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:39:43.155 14:00:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:39:43.155 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:39:43.155 fio-3.35 00:39:43.155 Starting 1 thread 00:39:45.739 00:39:45.739 test: (groupid=0, jobs=1): err= 0: pid=74952: Wed Nov 20 14:00:42 2024 00:39:45.739 read: IOPS=8734, BW=34.1MiB/s (35.8MB/s)(68.5MiB/2007msec) 00:39:45.739 slat (nsec): min=1600, max=426903, avg=2079.63, stdev=4192.40 00:39:45.739 clat (usec): min=3601, max=13369, avg=7664.72, stdev=575.08 00:39:45.739 lat (usec): min=3654, max=13371, avg=7666.80, stdev=574.95 00:39:45.739 clat percentiles (usec): 00:39:45.739 | 1.00th=[ 6456], 5.00th=[ 6915], 10.00th=[ 7046], 20.00th=[ 7242], 00:39:45.739 | 30.00th=[ 7439], 40.00th=[ 7504], 50.00th=[ 7635], 60.00th=[ 7767], 00:39:45.739 | 70.00th=[ 7898], 80.00th=[ 8029], 90.00th=[ 8291], 95.00th=[ 8455], 00:39:45.739 | 99.00th=[ 8979], 99.50th=[10814], 99.90th=[12387], 99.95th=[12780], 00:39:45.739 | 99.99th=[13304] 00:39:45.739 bw ( KiB/s): min=34203, max=35344, per=99.94%, avg=34916.75, stdev=494.32, samples=4 00:39:45.739 iops : min= 8550, max= 8836, avg=8729.00, stdev=123.94, samples=4 00:39:45.739 write: IOPS=8731, BW=34.1MiB/s (35.8MB/s)(68.5MiB/2007msec); 0 zone resets 00:39:45.739 slat (nsec): min=1633, max=319254, avg=2108.80, stdev=2766.52 00:39:45.739 clat (usec): min=3410, max=13154, avg=6935.68, stdev=519.42 00:39:45.739 lat (usec): min=3429, max=13156, avg=6937.79, stdev=519.43 00:39:45.739 
clat percentiles (usec): 00:39:45.739 | 1.00th=[ 5866], 5.00th=[ 6194], 10.00th=[ 6390], 20.00th=[ 6587], 00:39:45.739 | 30.00th=[ 6718], 40.00th=[ 6849], 50.00th=[ 6915], 60.00th=[ 7046], 00:39:45.739 | 70.00th=[ 7111], 80.00th=[ 7242], 90.00th=[ 7439], 95.00th=[ 7635], 00:39:45.739 | 99.00th=[ 8029], 99.50th=[ 9241], 99.90th=[11600], 99.95th=[12518], 00:39:45.739 | 99.99th=[12649] 00:39:45.739 bw ( KiB/s): min=34240, max=35584, per=99.93%, avg=34904.25, stdev=556.39, samples=4 00:39:45.739 iops : min= 8560, max= 8896, avg=8726.00, stdev=139.08, samples=4 00:39:45.739 lat (msec) : 4=0.03%, 10=99.42%, 20=0.56% 00:39:45.739 cpu : usr=74.58%, sys=20.94%, ctx=12, majf=0, minf=7 00:39:45.739 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:39:45.739 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.739 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:45.739 issued rwts: total=17530,17525,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:45.739 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:45.739 00:39:45.739 Run status group 0 (all jobs): 00:39:45.739 READ: bw=34.1MiB/s (35.8MB/s), 34.1MiB/s-34.1MiB/s (35.8MB/s-35.8MB/s), io=68.5MiB (71.8MB), run=2007-2007msec 00:39:45.739 WRITE: bw=34.1MiB/s (35.8MB/s), 34.1MiB/s-34.1MiB/s (35.8MB/s-35.8MB/s), io=68.5MiB (71.8MB), run=2007-2007msec 00:39:45.739 14:00:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:39:45.739 14:00:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:39:45.739 14:00:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:39:45.739 14:00:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:45.739 14:00:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:39:45.739 14:00:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:39:45.739 14:00:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:39:45.739 14:00:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:39:45.739 14:00:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:45.739 14:00:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:39:45.739 14:00:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:39:45.739 14:00:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:45.739 14:00:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:45.739 14:00:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:45.739 14:00:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:45.739 14:00:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:39:45.739 14:00:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:39:45.739 14:00:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:45.739 14:00:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:45.739 14:00:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:45.739 14:00:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:39:45.739 14:00:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:39:45.739 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:39:45.739 fio-3.35 00:39:45.739 Starting 1 thread 00:39:48.279 00:39:48.279 test: (groupid=0, jobs=1): err= 0: pid=74995: Wed Nov 20 14:00:45 2024 00:39:48.279 read: IOPS=8396, BW=131MiB/s (138MB/s)(264MiB/2009msec) 00:39:48.279 slat (usec): min=2, max=106, avg= 3.24, stdev= 1.90 00:39:48.279 clat (usec): min=2135, max=19126, avg=8858.91, stdev=2603.08 00:39:48.279 lat (usec): min=2138, max=19129, avg=8862.15, stdev=2603.16 00:39:48.279 clat percentiles (usec): 00:39:48.279 | 1.00th=[ 3720], 5.00th=[ 4621], 10.00th=[ 5342], 20.00th=[ 6456], 00:39:48.279 | 30.00th=[ 7373], 40.00th=[ 8094], 50.00th=[ 8979], 60.00th=[ 9634], 00:39:48.279 | 70.00th=[10159], 80.00th=[10945], 90.00th=[12387], 95.00th=[13304], 00:39:48.279 | 99.00th=[14877], 99.50th=[15533], 99.90th=[16712], 99.95th=[17171], 00:39:48.279 | 99.99th=[18744] 00:39:48.279 bw ( KiB/s): min=63232, max=71360, per=50.23%, avg=67488.00, stdev=3999.02, samples=4 00:39:48.279 iops : min= 3952, max= 4460, avg=4218.00, stdev=249.94, samples=4 00:39:48.279 write: IOPS=4794, BW=74.9MiB/s (78.6MB/s)(138MiB/1844msec); 0 zone resets 00:39:48.279 slat (usec): min=27, max=498, avg=36.03, stdev=11.18 00:39:48.279 clat (usec): min=6064, max=21158, avg=11552.49, stdev=2395.98 00:39:48.279 lat (usec): min=6109, max=21191, avg=11588.51, stdev=2398.13 00:39:48.279 clat percentiles (usec): 00:39:48.279 | 1.00th=[ 7439], 5.00th=[ 8160], 10.00th=[ 8717], 20.00th=[ 9503], 00:39:48.279 | 30.00th=[10159], 40.00th=[10683], 50.00th=[11207], 60.00th=[11731], 00:39:48.279 | 70.00th=[12518], 80.00th=[13304], 90.00th=[14877], 95.00th=[16188], 00:39:48.279 | 99.00th=[18220], 99.50th=[19006], 99.90th=[20579], 99.95th=[20579], 00:39:48.279 | 99.99th=[21103] 00:39:48.279 bw ( KiB/s): min=64544, max=75520, per=91.39%, avg=70104.00, stdev=5324.09, samples=4 00:39:48.279 iops : min= 4034, max= 4720, avg=4381.50, stdev=332.76, samples=4 00:39:48.279 lat (msec) : 4=1.28%, 10=52.22%, 20=46.43%, 50=0.08% 00:39:48.279 cpu : usr=82.82%, sys=13.99%, ctx=5, majf=0, minf=8 00:39:48.279 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:39:48.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:48.279 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:48.279 issued rwts: total=16869,8841,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:48.279 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:48.279 00:39:48.279 Run status group 0 (all jobs): 00:39:48.279 
READ: bw=131MiB/s (138MB/s), 131MiB/s-131MiB/s (138MB/s-138MB/s), io=264MiB (276MB), run=2009-2009msec 00:39:48.279 WRITE: bw=74.9MiB/s (78.6MB/s), 74.9MiB/s-74.9MiB/s (78.6MB/s-78.6MB/s), io=138MiB (145MB), run=1844-1844msec 00:39:48.279 14:00:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:48.279 14:00:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:39:48.279 14:00:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:39:48.279 14:00:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:39:48.279 14:00:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:39:48.279 14:00:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:48.279 14:00:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:39:48.279 14:00:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:48.279 14:00:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:39:48.279 14:00:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:48.279 14:00:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:48.279 rmmod nvme_tcp 00:39:48.538 rmmod nvme_fabrics 00:39:48.538 rmmod nvme_keyring 00:39:48.538 14:00:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:48.538 14:00:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:39:48.538 14:00:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:39:48.538 14:00:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 74869 ']' 00:39:48.538 14:00:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 74869 00:39:48.538 14:00:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 74869 ']' 00:39:48.538 14:00:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 74869 00:39:48.538 14:00:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:39:48.538 14:00:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:48.538 14:00:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74869 00:39:48.538 14:00:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:48.538 14:00:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:48.538 14:00:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74869' 00:39:48.538 killing process with pid 74869 00:39:48.538 14:00:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 74869 00:39:48.538 14:00:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 74869 00:39:48.798 14:00:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:48.798 14:00:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:48.798 14:00:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:48.798 14:00:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:39:48.798 14:00:45 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:39:48.798 14:00:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:48.798 14:00:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:39:48.798 14:00:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:48.798 14:00:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:39:48.798 14:00:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:39:48.798 14:00:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:39:48.798 14:00:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:39:48.798 14:00:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:39:48.798 14:00:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:39:48.798 14:00:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:39:48.798 14:00:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:39:48.798 14:00:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:39:48.798 14:00:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:39:48.798 14:00:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:39:48.798 14:00:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:39:49.058 14:00:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:39:49.058 14:00:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:39:49.058 14:00:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:39:49.058 14:00:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:49.058 14:00:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:49.058 14:00:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:49.058 14:00:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:39:49.058 00:39:49.058 real 0m9.240s 00:39:49.058 user 0m36.133s 00:39:49.058 sys 0m2.387s 00:39:49.058 14:00:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:49.058 14:00:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:39:49.058 ************************************ 00:39:49.058 END TEST nvmf_fio_host 00:39:49.058 ************************************ 00:39:49.058 14:00:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:39:49.058 14:00:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:49.058 14:00:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:49.058 14:00:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:39:49.058 ************************************ 00:39:49.058 START TEST nvmf_failover 
00:39:49.058 ************************************ 00:39:49.058 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:39:49.319 * Looking for test storage... 00:39:49.319 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:39:49.319 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:49.319 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:39:49.319 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:49.319 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:49.319 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:49.319 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:49.319 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:49.319 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:39:49.319 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:39:49.319 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:39:49.319 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:39:49.319 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:49.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:49.320 --rc genhtml_branch_coverage=1 00:39:49.320 --rc genhtml_function_coverage=1 00:39:49.320 --rc genhtml_legend=1 00:39:49.320 --rc geninfo_all_blocks=1 00:39:49.320 --rc geninfo_unexecuted_blocks=1 00:39:49.320 00:39:49.320 ' 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:49.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:49.320 --rc genhtml_branch_coverage=1 00:39:49.320 --rc genhtml_function_coverage=1 00:39:49.320 --rc genhtml_legend=1 00:39:49.320 --rc geninfo_all_blocks=1 00:39:49.320 --rc geninfo_unexecuted_blocks=1 00:39:49.320 00:39:49.320 ' 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:49.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:49.320 --rc genhtml_branch_coverage=1 00:39:49.320 --rc genhtml_function_coverage=1 00:39:49.320 --rc genhtml_legend=1 00:39:49.320 --rc geninfo_all_blocks=1 00:39:49.320 --rc geninfo_unexecuted_blocks=1 00:39:49.320 00:39:49.320 ' 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:49.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:49.320 --rc genhtml_branch_coverage=1 00:39:49.320 --rc genhtml_function_coverage=1 00:39:49.320 --rc genhtml_legend=1 00:39:49.320 --rc geninfo_all_blocks=1 00:39:49.320 --rc geninfo_unexecuted_blocks=1 00:39:49.320 00:39:49.320 ' 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=105ec898-1662-46bd-85be-b241e399edb9 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:49.320 
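The trace above shows nvmf/common.sh fixing the host-side identity for the virt topology: three TCP listener ports (4420-4422), a per-run host NQN from nvme gen-hostnqn, and the matching host ID. A minimal sketch of how those values would normally be combined into a kernel-initiator connect command follows; this particular run drives I/O through the SPDK fio plugin and bdevperf rather than the kernel initiator, so the command is illustrative only.

#!/usr/bin/env bash
# Sketch only: assemble an NVMe/TCP connect call from the variables set above.
NVMF_PORT=4420
NVME_HOSTNQN=$(nvme gen-hostnqn)             # e.g. nqn.2014-08.org.nvmexpress:uuid:...
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}          # bare UUID portion of the NQN
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
nvme connect -t tcp -a 10.0.0.3 -s "$NVMF_PORT" -n "$NVME_SUBNQN" \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"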
14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:49.320 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:39:49.320 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:39:49.321 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:49.321 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:39:49.321 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:39:49.321 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:39:49.321 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:49.321 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:39:49.321 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:39:49.321 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:39:49.321 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:39:49.321 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:39:49.321 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:39:49.321 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:49.321 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:39:49.321 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:39:49.321 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:39:49.321 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:39:49.321 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:39:49.321 Cannot find device "nvmf_init_br" 00:39:49.321 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:39:49.321 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:39:49.321 Cannot find device "nvmf_init_br2" 00:39:49.321 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:39:49.321 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:39:49.321 Cannot find device "nvmf_tgt_br" 00:39:49.321 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:39:49.321 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:39:49.321 Cannot find device "nvmf_tgt_br2" 00:39:49.321 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:39:49.321 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:39:49.581 Cannot find device "nvmf_init_br" 00:39:49.581 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:39:49.581 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:39:49.581 Cannot find device "nvmf_init_br2" 00:39:49.581 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:39:49.581 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:39:49.581 Cannot find device "nvmf_tgt_br" 00:39:49.581 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:39:49.581 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:39:49.581 Cannot find device "nvmf_tgt_br2" 00:39:49.581 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:39:49.581 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:39:49.581 Cannot find device "nvmf_br" 00:39:49.581 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:39:49.581 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:39:49.581 Cannot find device "nvmf_init_if" 00:39:49.581 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:39:49.581 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:39:49.581 Cannot find device "nvmf_init_if2" 00:39:49.581 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:39:49.581 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:39:49.581 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:49.581 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:39:49.581 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:39:49.581 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:49.581 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:39:49.581 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:39:49.581 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:39:49.581 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:39:49.581 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:39:49.581 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:39:49.581 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:39:49.581 
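The nvmf_veth_init helper traced here first tries to tear down any leftover interfaces, which is why every ip link set/delete above reports "Cannot find device" on a clean host, and only then builds the virtual topology: one network namespace for the target plus veth pairs whose bridge-facing ends stay in the root namespace. A condensed sketch of the creation steps seen so far, with interface and namespace names taken from the trace; the addressing, bridge enslavement, and iptables rules continue in the trace below.

# Sketch: rebuild the test topology created by nvmf_veth_init.
ip netns add nvmf_tgt_ns_spdk                                # target runs in its own namespace
ip link add nvmf_init_if  type veth peer name nvmf_init_br   # initiator-side veth pairs
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br    # target-side veth pairs
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk              # move the target ends into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk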
14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:39:49.581 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:39:49.581 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:39:49.581 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:39:49.581 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:39:49.581 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:39:49.581 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:39:49.581 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:39:49.581 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:39:49.581 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:39:49.581 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:39:49.581 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:39:49.581 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:39:49.581 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:39:49.581 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:39:49.581 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:39:49.842 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:39:49.842 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:39:49.842 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:39:49.842 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:39:49.842 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:39:49.842 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:39:49.842 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:39:49.842 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:39:49.842 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:39:49.842 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:39:49.842 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:39:49.842 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:39:49.842 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:39:49.842 00:39:49.842 --- 10.0.0.3 ping statistics --- 00:39:49.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:49.842 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:39:49.842 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:39:49.842 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:39:49.842 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:39:49.842 00:39:49.842 --- 10.0.0.4 ping statistics --- 00:39:49.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:49.842 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:39:49.842 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:39:49.842 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:49.842 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:39:49.842 00:39:49.842 --- 10.0.0.1 ping statistics --- 00:39:49.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:49.842 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:39:49.842 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:39:49.842 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:49.842 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.037 ms 00:39:49.842 00:39:49.842 --- 10.0.0.2 ping statistics --- 00:39:49.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:49.842 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:39:49.842 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:49.842 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:39:49.842 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:49.842 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:49.842 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:49.842 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:49.842 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:49.842 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:49.842 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:49.842 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:39:49.842 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:49.842 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:49.842 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:39:49.842 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=75273 00:39:49.842 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 75273 00:39:49.842 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:39:49.842 14:00:46 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75273 ']' 00:39:49.842 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:49.842 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:49.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:49.842 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:49.842 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:49.842 14:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:39:49.842 [2024-11-20 14:00:47.047229] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:39:49.842 [2024-11-20 14:00:47.047314] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:50.102 [2024-11-20 14:00:47.201269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:50.102 [2024-11-20 14:00:47.261769] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:50.102 [2024-11-20 14:00:47.261836] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:50.102 [2024-11-20 14:00:47.261842] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:50.102 [2024-11-20 14:00:47.261847] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:50.102 [2024-11-20 14:00:47.261851] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
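At this point nvmf_tgt has been launched inside the namespace (pid 75273) and waitforlisten blocks until the application answers on /var/tmp/spdk.sock before any configuration RPCs are issued. One way to approximate that wait is to poll a cheap RPC until it succeeds; this is a sketch, not the helper's actual implementation, and the retry budget is an assumption.

# Sketch: poll the RPC socket until the target is ready (assumed 100 x 0.1 s budget).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for _ in $(seq 1 100); do
    "$rpc" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1 && break
    sleep 0.1
done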
00:39:50.102 [2024-11-20 14:00:47.263083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:50.102 [2024-11-20 14:00:47.263229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:50.102 [2024-11-20 14:00:47.263230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:50.102 [2024-11-20 14:00:47.335611] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:39:50.671 14:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:50.671 14:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:39:50.671 14:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:50.671 14:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:50.671 14:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:39:50.931 14:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:50.931 14:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:50.931 [2024-11-20 14:00:48.221092] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:50.931 14:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:39:51.195 Malloc0 00:39:51.455 14:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:51.455 14:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:51.714 14:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:39:51.973 [2024-11-20 14:00:49.167117] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:39:51.973 14:00:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:39:52.232 [2024-11-20 14:00:49.382863] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:39:52.232 14:00:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:39:52.492 [2024-11-20 14:00:49.598567] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:39:52.492 14:00:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:39:52.492 14:00:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75330 00:39:52.492 14:00:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
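The configuration that failover.sh has just pushed through rpc.py is easier to read in one place: create the TCP transport, back it with a 64 MiB malloc bdev, expose the bdev through subsystem nqn.2016-06.io.spdk:cnode1, and listen on all three ports so that individual paths can later be torn down. A condensed recap of the calls from the trace above (the suite issues them against the namespaced target):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                     # TCP transport, options as passed by the suite
$rpc bdev_malloc_create 64 512 -b Malloc0                        # 64 MiB bdev with 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do                                   # three listeners = three candidate paths
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s "$port"
done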
00:39:52.492 14:00:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75330 /var/tmp/bdevperf.sock 00:39:52.492 14:00:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75330 ']' 00:39:52.492 14:00:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:52.492 14:00:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:52.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:52.492 14:00:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:52.492 14:00:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:52.492 14:00:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:39:52.751 14:00:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:52.751 14:00:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:39:52.751 14:00:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:39:53.010 NVMe0n1 00:39:53.270 14:00:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:39:53.529 00:39:53.529 14:00:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75342 00:39:53.529 14:00:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:53.529 14:00:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:39:54.468 14:00:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:39:54.729 [2024-11-20 14:00:51.883625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1512d30 is same with the state(6) to be set 00:39:54.729 [2024-11-20 14:00:51.883680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1512d30 is same with the state(6) to be set 00:39:54.729 [2024-11-20 14:00:51.883705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1512d30 is same with the state(6) to be set 00:39:54.729 [2024-11-20 14:00:51.883713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1512d30 is same with the state(6) to be set 00:39:54.729 [2024-11-20 14:00:51.883720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1512d30 is same with the state(6) to be set 00:39:54.729 [2024-11-20 14:00:51.883738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1512d30 is same with the state(6) to be set 00:39:54.729 [2024-11-20 14:00:51.883745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1512d30 is same with the state(6) to be set 00:39:54.729 [2024-11-20 14:00:51.883751] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1512d30 is same with the state(6) to be set 00:39:54.730 14:00:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:39:58.023 14:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:39:58.023 00:39:58.023 14:00:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:39:58.282 14:00:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:40:01.573 14:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:40:01.573 [2024-11-20 14:00:58.702535] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:40:01.573 14:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:40:02.510 14:00:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:40:02.770 14:00:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 75342 00:40:09.366 { 00:40:09.366 "results": [ 00:40:09.366 { 00:40:09.366 "job": "NVMe0n1", 00:40:09.366 "core_mask": "0x1", 00:40:09.366 "workload": "verify", 00:40:09.366 "status": "finished", 00:40:09.366 "verify_range": { 00:40:09.366 "start": 0, 00:40:09.366 "length": 16384 00:40:09.366 }, 00:40:09.366 "queue_depth": 128, 00:40:09.366 "io_size": 4096, 00:40:09.366 "runtime": 15.009336, 00:40:09.366 "iops": 9608.15321876997, 00:40:09.366 "mibps": 37.5318485108202, 00:40:09.366 "io_failed": 3861, 00:40:09.366 "io_timeout": 0, 00:40:09.366 "avg_latency_us": 12950.716871912318, 00:40:09.366 "min_latency_us": 420.33187772925766, 00:40:09.366 "max_latency_us": 25870.979912663755 00:40:09.366 } 00:40:09.366 ], 00:40:09.366 "core_count": 1 00:40:09.366 } 00:40:09.366 14:01:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 75330 00:40:09.366 14:01:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75330 ']' 00:40:09.366 14:01:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75330 00:40:09.366 14:01:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:40:09.366 14:01:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:09.366 14:01:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75330 00:40:09.366 14:01:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:09.366 14:01:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:09.366 killing process with pid 75330 00:40:09.366 14:01:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75330' 00:40:09.366 14:01:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75330 00:40:09.366 14:01:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- 
# wait 75330 00:40:09.366 14:01:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:40:09.366 [2024-11-20 14:00:49.653842] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:40:09.366 [2024-11-20 14:00:49.653926] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75330 ] 00:40:09.366 [2024-11-20 14:00:49.794214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:09.366 [2024-11-20 14:00:49.878236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:09.366 [2024-11-20 14:00:49.952686] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:09.366 Running I/O for 15 seconds... 00:40:09.366 7401.00 IOPS, 28.91 MiB/s [2024-11-20T14:01:06.689Z] [2024-11-20 14:00:51.884489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:65616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.366 [2024-11-20 14:00:51.884538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.366 [2024-11-20 14:00:51.884561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:65624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.366 [2024-11-20 14:00:51.884573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.366 [2024-11-20 14:00:51.884586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:65632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.366 [2024-11-20 14:00:51.884597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.366 [2024-11-20 14:00:51.884609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:65640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.366 [2024-11-20 14:00:51.884620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.366 [2024-11-20 14:00:51.884632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:65648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.366 [2024-11-20 14:00:51.884643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.366 [2024-11-20 14:00:51.884655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:65656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.366 [2024-11-20 14:00:51.884666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.366 [2024-11-20 14:00:51.884679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:65664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.366 [2024-11-20 14:00:51.884689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.366 [2024-11-20 14:00:51.884702] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:65672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.366 [2024-11-20 14:00:51.884713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.366 [2024-11-20 14:00:51.884726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:65680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.366 [2024-11-20 14:00:51.884747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.366 [2024-11-20 14:00:51.884760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:65688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.366 [2024-11-20 14:00:51.884771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.366 [2024-11-20 14:00:51.884783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:65696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.366 [2024-11-20 14:00:51.884824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.366 [2024-11-20 14:00:51.884837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:65704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.366 [2024-11-20 14:00:51.884848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.366 [2024-11-20 14:00:51.884860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:65712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.366 [2024-11-20 14:00:51.884871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.366 [2024-11-20 14:00:51.884884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:65720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.366 [2024-11-20 14:00:51.884894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.366 [2024-11-20 14:00:51.884907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:65728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.366 [2024-11-20 14:00:51.884917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.366 [2024-11-20 14:00:51.884931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:65736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.366 [2024-11-20 14:00:51.884941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.367 [2024-11-20 14:00:51.884954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:65744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.367 [2024-11-20 14:00:51.884973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.367 [2024-11-20 14:00:51.884986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:113 nsid:1 lba:65752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.367 [2024-11-20 14:00:51.884996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.367 [2024-11-20 14:00:51.885009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:65760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.367 [2024-11-20 14:00:51.885019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.367 [2024-11-20 14:00:51.885031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:65768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.367 [2024-11-20 14:00:51.885042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.367 [2024-11-20 14:00:51.885054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:65776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.367 [2024-11-20 14:00:51.885065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.367 [2024-11-20 14:00:51.885077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:65784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.367 [2024-11-20 14:00:51.885088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.367 [2024-11-20 14:00:51.885101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:65792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.367 [2024-11-20 14:00:51.885111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.367 [2024-11-20 14:00:51.885130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:65800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.367 [2024-11-20 14:00:51.885141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.367 [2024-11-20 14:00:51.885153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:65808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.367 [2024-11-20 14:00:51.885164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.367 [2024-11-20 14:00:51.885176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:65816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.367 [2024-11-20 14:00:51.885187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.367 [2024-11-20 14:00:51.885200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:65824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.367 [2024-11-20 14:00:51.885210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.367 [2024-11-20 14:00:51.885222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:65832 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.367 [2024-11-20 14:00:51.885233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.367 [2024-11-20 14:00:51.885245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:65840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.367 [2024-11-20 14:00:51.885256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.367 [2024-11-20 14:00:51.885268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:65848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.367 [2024-11-20 14:00:51.885278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.367 [2024-11-20 14:00:51.885290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:65856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.367 [2024-11-20 14:00:51.885301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.367 [2024-11-20 14:00:51.885313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:65864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.367 [2024-11-20 14:00:51.885324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.367 [2024-11-20 14:00:51.885336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:65872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.367 [2024-11-20 14:00:51.885350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.367 [2024-11-20 14:00:51.885363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:65880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.367 [2024-11-20 14:00:51.885374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.367 [2024-11-20 14:00:51.885386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:65888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.367 [2024-11-20 14:00:51.885399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.367 [2024-11-20 14:00:51.885411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.367 [2024-11-20 14:00:51.885426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.367 [2024-11-20 14:00:51.885439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:65904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.367 [2024-11-20 14:00:51.885450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.367 [2024-11-20 14:00:51.885462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:65912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:40:09.367 [2024-11-20 14:00:51.885473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.367 [2024-11-20 14:00:51.885486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:65920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.367 [2024-11-20 14:00:51.885496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.367 [2024-11-20 14:00:51.885509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:65928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.367 [2024-11-20 14:00:51.885520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.367 [2024-11-20 14:00:51.885532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:65936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.367 [2024-11-20 14:00:51.885542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.367 [2024-11-20 14:00:51.885554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:65944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.367 [2024-11-20 14:00:51.885565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.367 [2024-11-20 14:00:51.885577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:65952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.367 [2024-11-20 14:00:51.885588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.367 [2024-11-20 14:00:51.885600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:65960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.367 [2024-11-20 14:00:51.885610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.367 [2024-11-20 14:00:51.885622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:65968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.367 [2024-11-20 14:00:51.885633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.367 [2024-11-20 14:00:51.885645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:65976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.367 [2024-11-20 14:00:51.885656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.367 [2024-11-20 14:00:51.885668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:65984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.367 [2024-11-20 14:00:51.885679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.367 [2024-11-20 14:00:51.885691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:65992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.367 [2024-11-20 14:00:51.885702] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.367 [2024-11-20 14:00:51.885736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:66000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.367 [2024-11-20 14:00:51.885757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.367 [2024-11-20 14:00:51.885769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:66008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.367 [2024-11-20 14:00:51.885779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.367 [2024-11-20 14:00:51.885790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:66016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.367 [2024-11-20 14:00:51.885800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.367 [2024-11-20 14:00:51.885812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:66024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.367 [2024-11-20 14:00:51.885822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.367 [2024-11-20 14:00:51.885833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:66032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.367 [2024-11-20 14:00:51.885843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.367 [2024-11-20 14:00:51.885855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:66040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.367 [2024-11-20 14:00:51.885864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.367 [2024-11-20 14:00:51.885876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:66048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.367 [2024-11-20 14:00:51.885885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.367 [2024-11-20 14:00:51.885897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:66056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.368 [2024-11-20 14:00:51.885907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.368 [2024-11-20 14:00:51.885919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:66064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.368 [2024-11-20 14:00:51.885928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.368 [2024-11-20 14:00:51.885940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:66072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.368 [2024-11-20 14:00:51.885950] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.368 [2024-11-20 14:00:51.885961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:66080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.368 [2024-11-20 14:00:51.885971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.368 [2024-11-20 14:00:51.885983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.368 [2024-11-20 14:00:51.885993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.368 [2024-11-20 14:00:51.886005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:66096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.368 [2024-11-20 14:00:51.886015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.368 [2024-11-20 14:00:51.886031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:66104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.368 [2024-11-20 14:00:51.886040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.368 [2024-11-20 14:00:51.886052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:66112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.368 [2024-11-20 14:00:51.886063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.368 [2024-11-20 14:00:51.886074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:66120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.368 [2024-11-20 14:00:51.886084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.368 [2024-11-20 14:00:51.886095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:66128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.368 [2024-11-20 14:00:51.886108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.368 [2024-11-20 14:00:51.886120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:66136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.368 [2024-11-20 14:00:51.886130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.368 [2024-11-20 14:00:51.886141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:66144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.368 [2024-11-20 14:00:51.886151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.368 [2024-11-20 14:00:51.886163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:66152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.368 [2024-11-20 14:00:51.886173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.368 [2024-11-20 14:00:51.886184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:66160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.368 [2024-11-20 14:00:51.886194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.368 [2024-11-20 14:00:51.886205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:66168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.368 [2024-11-20 14:00:51.886215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.368 [2024-11-20 14:00:51.886227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:66176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.368 [2024-11-20 14:00:51.886236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.368 [2024-11-20 14:00:51.886248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:66184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.368 [2024-11-20 14:00:51.886258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.368 [2024-11-20 14:00:51.886269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:66192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.368 [2024-11-20 14:00:51.886279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.368 [2024-11-20 14:00:51.886291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:66200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.368 [2024-11-20 14:00:51.886305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.368 [2024-11-20 14:00:51.886316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:66208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.368 [2024-11-20 14:00:51.886326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.368 [2024-11-20 14:00:51.886337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.368 [2024-11-20 14:00:51.886348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.368 [2024-11-20 14:00:51.886359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:66224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.368 [2024-11-20 14:00:51.886370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.368 [2024-11-20 14:00:51.886381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:66232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.368 [2024-11-20 14:00:51.886391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.368 [2024-11-20 14:00:51.886402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:66240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.368 [2024-11-20 14:00:51.886412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.368 [2024-11-20 14:00:51.886424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:66248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.368 [2024-11-20 14:00:51.886434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.368 [2024-11-20 14:00:51.886445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:66256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.368 [2024-11-20 14:00:51.886458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.368 [2024-11-20 14:00:51.886470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:66264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.368 [2024-11-20 14:00:51.886479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.368 [2024-11-20 14:00:51.886491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:66272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.368 [2024-11-20 14:00:51.886501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.368 [2024-11-20 14:00:51.886513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:66280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.368 [2024-11-20 14:00:51.886522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.368 [2024-11-20 14:00:51.886534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:66536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.368 [2024-11-20 14:00:51.886545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.368 [2024-11-20 14:00:51.886557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.368 [2024-11-20 14:00:51.886566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.368 [2024-11-20 14:00:51.886583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:66288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.368 [2024-11-20 14:00:51.886593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.368 [2024-11-20 14:00:51.886605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:66296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.368 [2024-11-20 14:00:51.886614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.368 
[2024-11-20 14:00:51.886626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:66304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.368 [2024-11-20 14:00:51.886636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.368 [2024-11-20 14:00:51.886647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:66312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.368 [2024-11-20 14:00:51.886657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.368 [2024-11-20 14:00:51.886669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:66320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.368 [2024-11-20 14:00:51.886679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.368 [2024-11-20 14:00:51.886690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:66328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.368 [2024-11-20 14:00:51.886700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.368 [2024-11-20 14:00:51.886712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:66336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.368 [2024-11-20 14:00:51.886735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.368 [2024-11-20 14:00:51.886747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.368 [2024-11-20 14:00:51.886757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.369 [2024-11-20 14:00:51.886768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:66352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.369 [2024-11-20 14:00:51.886778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.369 [2024-11-20 14:00:51.886790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:66360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.369 [2024-11-20 14:00:51.886800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.369 [2024-11-20 14:00:51.886811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.369 [2024-11-20 14:00:51.886824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.369 [2024-11-20 14:00:51.886861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:66376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.369 [2024-11-20 14:00:51.886871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.369 [2024-11-20 14:00:51.886884] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:66384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.369 [2024-11-20 14:00:51.886900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.369 [2024-11-20 14:00:51.886913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:66392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.369 [2024-11-20 14:00:51.886924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.369 [2024-11-20 14:00:51.886936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:66400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.369 [2024-11-20 14:00:51.886947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.369 [2024-11-20 14:00:51.886959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:66408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.369 [2024-11-20 14:00:51.886969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.369 [2024-11-20 14:00:51.886982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:66416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.369 [2024-11-20 14:00:51.886993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.369 [2024-11-20 14:00:51.887005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:66424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.369 [2024-11-20 14:00:51.887016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.369 [2024-11-20 14:00:51.887028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:66432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.369 [2024-11-20 14:00:51.887038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.369 [2024-11-20 14:00:51.887051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:66440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.369 [2024-11-20 14:00:51.887062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.369 [2024-11-20 14:00:51.887074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:66448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.369 [2024-11-20 14:00:51.887084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.369 [2024-11-20 14:00:51.887096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:66456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.369 [2024-11-20 14:00:51.887107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.369 [2024-11-20 14:00:51.887120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:65 nsid:1 lba:66464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.369 [2024-11-20 14:00:51.887130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.369 [2024-11-20 14:00:51.887142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:66472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.369 [2024-11-20 14:00:51.887153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.369 [2024-11-20 14:00:51.887165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:66552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.369 [2024-11-20 14:00:51.887176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.369 [2024-11-20 14:00:51.887188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:66560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.369 [2024-11-20 14:00:51.887203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.369 [2024-11-20 14:00:51.887216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:66568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.369 [2024-11-20 14:00:51.887229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.369 [2024-11-20 14:00:51.887241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:66576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.369 [2024-11-20 14:00:51.887252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.369 [2024-11-20 14:00:51.887264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:66584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.369 [2024-11-20 14:00:51.887275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.369 [2024-11-20 14:00:51.887286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:66592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.369 [2024-11-20 14:00:51.887297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.369 [2024-11-20 14:00:51.887309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:66600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.369 [2024-11-20 14:00:51.887320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.369 [2024-11-20 14:00:51.887332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:66608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.369 [2024-11-20 14:00:51.887343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.369 [2024-11-20 14:00:51.887355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:66480 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.369 [2024-11-20 14:00:51.887365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.369 [2024-11-20 14:00:51.887377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:66488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.369 [2024-11-20 14:00:51.887388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.369 [2024-11-20 14:00:51.887401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:66496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.369 [2024-11-20 14:00:51.887412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.369 [2024-11-20 14:00:51.887424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:66504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.369 [2024-11-20 14:00:51.887435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.369 [2024-11-20 14:00:51.887447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:66512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.369 [2024-11-20 14:00:51.887457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.369 [2024-11-20 14:00:51.887469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:66520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.369 [2024-11-20 14:00:51.887481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.369 [2024-11-20 14:00:51.887498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:66528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.369 [2024-11-20 14:00:51.887509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.369 [2024-11-20 14:00:51.887521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:66616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.369 [2024-11-20 14:00:51.887532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.369 [2024-11-20 14:00:51.887544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:66624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.369 [2024-11-20 14:00:51.887554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.369 [2024-11-20 14:00:51.887566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x793c20 is same with the state(6) to be set 00:40:09.369 [2024-11-20 14:00:51.887579] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:40:09.369 [2024-11-20 14:00:51.887587] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:40:09.369 [2024-11-20 14:00:51.887598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:66632 len:8 PRP1 0x0 PRP2 0x0 00:40:09.369 [2024-11-20 14:00:51.887608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.369 [2024-11-20 14:00:51.887678] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:40:09.369 [2024-11-20 14:00:51.887739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:40:09.369 [2024-11-20 14:00:51.887754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.369 [2024-11-20 14:00:51.887765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:40:09.369 [2024-11-20 14:00:51.887776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.369 [2024-11-20 14:00:51.887787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:40:09.369 [2024-11-20 14:00:51.887798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.369 [2024-11-20 14:00:51.887809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:40:09.370 [2024-11-20 14:00:51.887819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.370 [2024-11-20 14:00:51.887830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:40:09.370 [2024-11-20 14:00:51.891280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:40:09.370 [2024-11-20 14:00:51.891322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f9710 (9): Bad file descriptor 00:40:09.370 [2024-11-20 14:00:51.915281] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:40:09.370 8546.00 IOPS, 33.38 MiB/s [2024-11-20T14:01:06.693Z] 9241.33 IOPS, 36.10 MiB/s [2024-11-20T14:01:06.693Z] 9434.75 IOPS, 36.85 MiB/s [2024-11-20T14:01:06.693Z] [2024-11-20 14:00:55.449192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:104008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.370 [2024-11-20 14:00:55.449283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.370 [2024-11-20 14:00:55.449340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:104016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.370 [2024-11-20 14:00:55.449352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.370 [2024-11-20 14:00:55.449366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:104024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.370 [2024-11-20 14:00:55.449378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.370 [2024-11-20 14:00:55.449390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:104032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.370 [2024-11-20 14:00:55.449402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.370 [2024-11-20 14:00:55.449414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:104040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.370 [2024-11-20 14:00:55.449426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.370 [2024-11-20 14:00:55.449439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:104048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.370 [2024-11-20 14:00:55.449450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.370 [2024-11-20 14:00:55.449464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:104056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.370 [2024-11-20 14:00:55.449475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.370 [2024-11-20 14:00:55.449489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:104064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.370 [2024-11-20 14:00:55.449500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.370 [2024-11-20 14:00:55.449514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:104072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.370 [2024-11-20 14:00:55.449526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.370 [2024-11-20 14:00:55.449539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:104080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.370 [2024-11-20 14:00:55.449551] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.370 [2024-11-20 14:00:55.449564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:104088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.370 [2024-11-20 14:00:55.449574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.370 [2024-11-20 14:00:55.449586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:104096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.370 [2024-11-20 14:00:55.449597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.370 [2024-11-20 14:00:55.449609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:104104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.370 [2024-11-20 14:00:55.449631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.370 [2024-11-20 14:00:55.449643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:104112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.370 [2024-11-20 14:00:55.449661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.370 [2024-11-20 14:00:55.449674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:104120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.370 [2024-11-20 14:00:55.449684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.370 [2024-11-20 14:00:55.449698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:104128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.370 [2024-11-20 14:00:55.449731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.370 [2024-11-20 14:00:55.449745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:103432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.370 [2024-11-20 14:00:55.449756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.370 [2024-11-20 14:00:55.449771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:103440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.370 [2024-11-20 14:00:55.449783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.370 [2024-11-20 14:00:55.449796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:103448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.370 [2024-11-20 14:00:55.449807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.370 [2024-11-20 14:00:55.449820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:103456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.370 [2024-11-20 14:00:55.449833] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.370 [2024-11-20 14:00:55.449847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:103464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.370 [2024-11-20 14:00:55.449859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.370 [2024-11-20 14:00:55.449872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:103472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.370 [2024-11-20 14:00:55.449884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.370 [2024-11-20 14:00:55.449897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:103480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.370 [2024-11-20 14:00:55.449909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.370 [2024-11-20 14:00:55.449921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:103488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.370 [2024-11-20 14:00:55.449933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.370 [2024-11-20 14:00:55.449946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:103496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.370 [2024-11-20 14:00:55.449958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.370 [2024-11-20 14:00:55.449971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.370 [2024-11-20 14:00:55.449983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.370 [2024-11-20 14:00:55.450004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:103512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.370 [2024-11-20 14:00:55.450016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.370 [2024-11-20 14:00:55.450030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:103520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.370 [2024-11-20 14:00:55.450042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.370 [2024-11-20 14:00:55.450055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:103528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.370 [2024-11-20 14:00:55.450067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.370 [2024-11-20 14:00:55.450080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:103536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.370 [2024-11-20 14:00:55.450090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.371 [2024-11-20 14:00:55.450103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:103544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.371 [2024-11-20 14:00:55.450115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.371 [2024-11-20 14:00:55.450128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.371 [2024-11-20 14:00:55.450140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.371 [2024-11-20 14:00:55.450153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:104136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.371 [2024-11-20 14:00:55.450166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.371 [2024-11-20 14:00:55.450179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:104144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.371 [2024-11-20 14:00:55.450190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.371 [2024-11-20 14:00:55.450204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:104152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.371 [2024-11-20 14:00:55.450217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.371 [2024-11-20 14:00:55.450230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:104160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.371 [2024-11-20 14:00:55.450242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.371 [2024-11-20 14:00:55.450256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:104168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.371 [2024-11-20 14:00:55.450267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.371 [2024-11-20 14:00:55.450281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:104176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.371 [2024-11-20 14:00:55.450293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.371 [2024-11-20 14:00:55.450305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:104184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.371 [2024-11-20 14:00:55.450325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.371 [2024-11-20 14:00:55.450338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:104192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.371 [2024-11-20 14:00:55.450350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:40:09.371 [2024-11-20 14:00:55.450362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:103560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.371 [2024-11-20 14:00:55.450374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.371 [2024-11-20 14:00:55.450388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:103568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.371 [2024-11-20 14:00:55.450399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.371 [2024-11-20 14:00:55.450412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:103576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.371 [2024-11-20 14:00:55.450429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.371 [2024-11-20 14:00:55.450444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:103584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.371 [2024-11-20 14:00:55.450457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.371 [2024-11-20 14:00:55.450470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:103592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.371 [2024-11-20 14:00:55.450483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.371 [2024-11-20 14:00:55.450495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.371 [2024-11-20 14:00:55.450506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.371 [2024-11-20 14:00:55.450518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:103608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.371 [2024-11-20 14:00:55.450530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.371 [2024-11-20 14:00:55.450543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:103616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.371 [2024-11-20 14:00:55.450554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.371 [2024-11-20 14:00:55.450567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:103624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.371 [2024-11-20 14:00:55.450579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.371 [2024-11-20 14:00:55.450592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:103632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.371 [2024-11-20 14:00:55.450604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.371 [2024-11-20 
14:00:55.450618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:103640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.371 [2024-11-20 14:00:55.450629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.371 [2024-11-20 14:00:55.450648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:103648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.371 [2024-11-20 14:00:55.450661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.371 [2024-11-20 14:00:55.450674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:103656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.371 [2024-11-20 14:00:55.450684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.371 [2024-11-20 14:00:55.450697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:103664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.371 [2024-11-20 14:00:55.450718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.371 [2024-11-20 14:00:55.450732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:103672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.371 [2024-11-20 14:00:55.450744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.371 [2024-11-20 14:00:55.450758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:103680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.371 [2024-11-20 14:00:55.450769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.371 [2024-11-20 14:00:55.450782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:103688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.371 [2024-11-20 14:00:55.450795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.371 [2024-11-20 14:00:55.450808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.371 [2024-11-20 14:00:55.450819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.371 [2024-11-20 14:00:55.450840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:103704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.371 [2024-11-20 14:00:55.450851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.371 [2024-11-20 14:00:55.450877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:103712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.371 [2024-11-20 14:00:55.450889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.371 [2024-11-20 14:00:55.450903] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:103720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.371 [2024-11-20 14:00:55.450914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.371 [2024-11-20 14:00:55.450927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:103728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.371 [2024-11-20 14:00:55.450938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.371 [2024-11-20 14:00:55.450950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:103736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.371 [2024-11-20 14:00:55.450962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.371 [2024-11-20 14:00:55.450975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:103744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.371 [2024-11-20 14:00:55.450986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.371 [2024-11-20 14:00:55.451004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:104200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.371 [2024-11-20 14:00:55.451016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.371 [2024-11-20 14:00:55.451030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:104208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.372 [2024-11-20 14:00:55.451041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.372 [2024-11-20 14:00:55.451056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:104216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.372 [2024-11-20 14:00:55.451068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.372 [2024-11-20 14:00:55.451080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:104224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.372 [2024-11-20 14:00:55.451093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.372 [2024-11-20 14:00:55.451105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:104232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.372 [2024-11-20 14:00:55.451116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.372 [2024-11-20 14:00:55.451128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:104240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.372 [2024-11-20 14:00:55.451139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.372 [2024-11-20 14:00:55.451153] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:104248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.372 [2024-11-20 14:00:55.451165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.372 [2024-11-20 14:00:55.451178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:104256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.372 [2024-11-20 14:00:55.451189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.372 [2024-11-20 14:00:55.451201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:104264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.372 [2024-11-20 14:00:55.451212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.372 [2024-11-20 14:00:55.451226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:104272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.372 [2024-11-20 14:00:55.451239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.372 [2024-11-20 14:00:55.451252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:104280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.372 [2024-11-20 14:00:55.451264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.372 [2024-11-20 14:00:55.451278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:104288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.372 [2024-11-20 14:00:55.451290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.372 [2024-11-20 14:00:55.451302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.372 [2024-11-20 14:00:55.451319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.372 [2024-11-20 14:00:55.451333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:104304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.372 [2024-11-20 14:00:55.451344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.372 [2024-11-20 14:00:55.451359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:104312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.372 [2024-11-20 14:00:55.451371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.372 [2024-11-20 14:00:55.451383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:104320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.372 [2024-11-20 14:00:55.451402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.372 [2024-11-20 14:00:55.451416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 
lba:103752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.372 [2024-11-20 14:00:55.451427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.372 [2024-11-20 14:00:55.451441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:103760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.372 [2024-11-20 14:00:55.451453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.372 [2024-11-20 14:00:55.451465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:103768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.372 [2024-11-20 14:00:55.451476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.372 [2024-11-20 14:00:55.451491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:103776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.372 [2024-11-20 14:00:55.451502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.372 [2024-11-20 14:00:55.451516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:103784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.372 [2024-11-20 14:00:55.451527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.372 [2024-11-20 14:00:55.451540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:103792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.372 [2024-11-20 14:00:55.451553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.372 [2024-11-20 14:00:55.451567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:103800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.372 [2024-11-20 14:00:55.451579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.372 [2024-11-20 14:00:55.451591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:103808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.372 [2024-11-20 14:00:55.451602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.372 [2024-11-20 14:00:55.451615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:103816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.372 [2024-11-20 14:00:55.451627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.372 [2024-11-20 14:00:55.451645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:103824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.372 [2024-11-20 14:00:55.451657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.372 [2024-11-20 14:00:55.451672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:103832 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:40:09.372 [2024-11-20 14:00:55.451684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.372 [2024-11-20 14:00:55.451697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:103840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.372 [2024-11-20 14:00:55.451718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.372 [2024-11-20 14:00:55.451732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:103848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.372 [2024-11-20 14:00:55.451743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.372 [2024-11-20 14:00:55.451755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:103856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.372 [2024-11-20 14:00:55.451767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.372 [2024-11-20 14:00:55.451782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:103864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.372 [2024-11-20 14:00:55.451794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.372 [2024-11-20 14:00:55.451808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:103872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.372 [2024-11-20 14:00:55.451819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.372 [2024-11-20 14:00:55.451834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:104328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.372 [2024-11-20 14:00:55.451846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.372 [2024-11-20 14:00:55.451858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:104336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.372 [2024-11-20 14:00:55.451869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.372 [2024-11-20 14:00:55.451883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:104344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.373 [2024-11-20 14:00:55.451896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.373 [2024-11-20 14:00:55.451910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:104352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.373 [2024-11-20 14:00:55.451921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.373 [2024-11-20 14:00:55.451936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:104360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.373 
[2024-11-20 14:00:55.451947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.373 [2024-11-20 14:00:55.451962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:104368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.373 [2024-11-20 14:00:55.451980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.373 [2024-11-20 14:00:55.451993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:104376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.373 [2024-11-20 14:00:55.452004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.373 [2024-11-20 14:00:55.452023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.373 [2024-11-20 14:00:55.452036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.373 [2024-11-20 14:00:55.452050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:103880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.373 [2024-11-20 14:00:55.452062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.373 [2024-11-20 14:00:55.452075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:103888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.373 [2024-11-20 14:00:55.452086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.373 [2024-11-20 14:00:55.452099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:103896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.373 [2024-11-20 14:00:55.452110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.373 [2024-11-20 14:00:55.452124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:103904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.373 [2024-11-20 14:00:55.452135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.373 [2024-11-20 14:00:55.452148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:103912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.373 [2024-11-20 14:00:55.452159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.373 [2024-11-20 14:00:55.452172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.373 [2024-11-20 14:00:55.452183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.373 [2024-11-20 14:00:55.452196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:103928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.373 [2024-11-20 14:00:55.452208] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.373 [2024-11-20 14:00:55.452220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:103936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.373 [2024-11-20 14:00:55.452232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.373 [2024-11-20 14:00:55.452246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:103944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.373 [2024-11-20 14:00:55.452257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.373 [2024-11-20 14:00:55.452270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:103952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.373 [2024-11-20 14:00:55.452284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.373 [2024-11-20 14:00:55.452302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:103960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.373 [2024-11-20 14:00:55.452315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.373 [2024-11-20 14:00:55.452328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:103968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.373 [2024-11-20 14:00:55.452339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.373 [2024-11-20 14:00:55.452353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:103976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.373 [2024-11-20 14:00:55.452364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.373 [2024-11-20 14:00:55.452377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:103984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.373 [2024-11-20 14:00:55.452389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.373 [2024-11-20 14:00:55.452404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:103992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.373 [2024-11-20 14:00:55.452415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.373 [2024-11-20 14:00:55.452427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x797af0 is same with the state(6) to be set 00:40:09.373 [2024-11-20 14:00:55.452452] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:40:09.373 [2024-11-20 14:00:55.452462] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:40:09.373 [2024-11-20 14:00:55.452471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104000 len:8 PRP1 0x0 PRP2 0x0 00:40:09.373 [2024-11-20 
14:00:55.452482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.373 [2024-11-20 14:00:55.452495] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:40:09.373 [2024-11-20 14:00:55.452504] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:40:09.373 [2024-11-20 14:00:55.452512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104392 len:8 PRP1 0x0 PRP2 0x0 00:40:09.373 [2024-11-20 14:00:55.452523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.373 [2024-11-20 14:00:55.452546] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:40:09.373 [2024-11-20 14:00:55.452555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:40:09.373 [2024-11-20 14:00:55.452564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104400 len:8 PRP1 0x0 PRP2 0x0 00:40:09.373 [2024-11-20 14:00:55.452574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.373 [2024-11-20 14:00:55.452587] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:40:09.373 [2024-11-20 14:00:55.452596] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:40:09.373 [2024-11-20 14:00:55.452605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104408 len:8 PRP1 0x0 PRP2 0x0 00:40:09.373 [2024-11-20 14:00:55.452616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.373 [2024-11-20 14:00:55.452627] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:40:09.373 [2024-11-20 14:00:55.452636] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:40:09.373 [2024-11-20 14:00:55.452650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104416 len:8 PRP1 0x0 PRP2 0x0 00:40:09.373 [2024-11-20 14:00:55.452662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.373 [2024-11-20 14:00:55.452673] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:40:09.373 [2024-11-20 14:00:55.452682] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:40:09.373 [2024-11-20 14:00:55.452691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104424 len:8 PRP1 0x0 PRP2 0x0 00:40:09.373 [2024-11-20 14:00:55.452704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.373 [2024-11-20 14:00:55.452729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:40:09.373 [2024-11-20 14:00:55.452738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:40:09.373 [2024-11-20 14:00:55.452746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104432 len:8 PRP1 0x0 PRP2 0x0 00:40:09.373 [2024-11-20 14:00:55.452765] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.373 [2024-11-20 14:00:55.452777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:40:09.373 [2024-11-20 14:00:55.452786] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:40:09.373 [2024-11-20 14:00:55.452795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104440 len:8 PRP1 0x0 PRP2 0x0 00:40:09.373 [2024-11-20 14:00:55.452807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.373 [2024-11-20 14:00:55.452823] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:40:09.373 [2024-11-20 14:00:55.452832] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:40:09.373 [2024-11-20 14:00:55.452841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104448 len:8 PRP1 0x0 PRP2 0x0 00:40:09.373 [2024-11-20 14:00:55.452851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.373 [2024-11-20 14:00:55.452906] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:40:09.373 [2024-11-20 14:00:55.452969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:40:09.373 [2024-11-20 14:00:55.452983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.373 [2024-11-20 14:00:55.452996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:40:09.373 [2024-11-20 14:00:55.453008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.373 [2024-11-20 14:00:55.453020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:40:09.374 [2024-11-20 14:00:55.453031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.374 [2024-11-20 14:00:55.453043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:40:09.374 [2024-11-20 14:00:55.453055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.374 [2024-11-20 14:00:55.453068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:40:09.374 [2024-11-20 14:00:55.453112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f9710 (9): Bad file descriptor 00:40:09.374 [2024-11-20 14:00:55.455791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:40:09.374 [2024-11-20 14:00:55.478444] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:40:09.374 9437.00 IOPS, 36.86 MiB/s [2024-11-20T14:01:06.697Z] 9490.83 IOPS, 37.07 MiB/s [2024-11-20T14:01:06.697Z] 9480.14 IOPS, 37.03 MiB/s [2024-11-20T14:01:06.697Z] 9524.38 IOPS, 37.20 MiB/s [2024-11-20T14:01:06.697Z] 9624.11 IOPS, 37.59 MiB/s [2024-11-20T14:01:06.697Z] [2024-11-20 14:00:59.943166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:74488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.374 [2024-11-20 14:00:59.943246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.374 [2024-11-20 14:00:59.943269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:74496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.374 [2024-11-20 14:00:59.943280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.374 [2024-11-20 14:00:59.943293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:74504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.374 [2024-11-20 14:00:59.943303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.374 [2024-11-20 14:00:59.943315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:74512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.374 [2024-11-20 14:00:59.943325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.374 [2024-11-20 14:00:59.943336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.374 [2024-11-20 14:00:59.943347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.374 [2024-11-20 14:00:59.943359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:74528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.374 [2024-11-20 14:00:59.943369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.374 [2024-11-20 14:00:59.943381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:74536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.374 [2024-11-20 14:00:59.943391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.374 [2024-11-20 14:00:59.943402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:74544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.374 [2024-11-20 14:00:59.943412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.374 [2024-11-20 14:00:59.943423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.374 [2024-11-20 14:00:59.943434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.374 [2024-11-20 14:00:59.943446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:74560 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.374 [2024-11-20 14:00:59.943455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.374 [2024-11-20 14:00:59.943467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:74568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.374 [2024-11-20 14:00:59.943477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.374 [2024-11-20 14:00:59.943526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.374 [2024-11-20 14:00:59.943536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.374 [2024-11-20 14:00:59.943548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:74584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.374 [2024-11-20 14:00:59.943558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.374 [2024-11-20 14:00:59.943570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:74592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.374 [2024-11-20 14:00:59.943580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.374 [2024-11-20 14:00:59.943593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:74600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.374 [2024-11-20 14:00:59.943603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.374 [2024-11-20 14:00:59.943615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:74608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.374 [2024-11-20 14:00:59.943626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.374 [2024-11-20 14:00:59.943637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:74104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.374 [2024-11-20 14:00:59.943647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.374 [2024-11-20 14:00:59.943663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.374 [2024-11-20 14:00:59.943674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.374 [2024-11-20 14:00:59.943686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.374 [2024-11-20 14:00:59.943699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.374 [2024-11-20 14:00:59.943724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:74128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:40:09.374 [2024-11-20 14:00:59.943734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.374 [2024-11-20 14:00:59.943746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:74136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.374 [2024-11-20 14:00:59.943756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.374 [2024-11-20 14:00:59.943768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:74144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.374 [2024-11-20 14:00:59.943779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.374 [2024-11-20 14:00:59.943790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:74152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.374 [2024-11-20 14:00:59.943801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.374 [2024-11-20 14:00:59.943813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:74160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.374 [2024-11-20 14:00:59.943831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.374 [2024-11-20 14:00:59.943843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:74616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.374 [2024-11-20 14:00:59.943854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.374 [2024-11-20 14:00:59.943866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:74624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.374 [2024-11-20 14:00:59.943876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.374 [2024-11-20 14:00:59.943889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:74632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.375 [2024-11-20 14:00:59.943899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.375 [2024-11-20 14:00:59.943911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:74640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.375 [2024-11-20 14:00:59.943920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.375 [2024-11-20 14:00:59.943932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:74648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.375 [2024-11-20 14:00:59.943942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.375 [2024-11-20 14:00:59.943953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:74656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.375 [2024-11-20 14:00:59.943963] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.375 [2024-11-20 14:00:59.943974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:74664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.375 [2024-11-20 14:00:59.943984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.375 [2024-11-20 14:00:59.943996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:74672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.375 [2024-11-20 14:00:59.944006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.375 [2024-11-20 14:00:59.944017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:74680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.375 [2024-11-20 14:00:59.944027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.375 [2024-11-20 14:00:59.944039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:74688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.375 [2024-11-20 14:00:59.944049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.375 [2024-11-20 14:00:59.944061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:74696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.375 [2024-11-20 14:00:59.944074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.375 [2024-11-20 14:00:59.944086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:74704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.375 [2024-11-20 14:00:59.944096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.375 [2024-11-20 14:00:59.944113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:74712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.375 [2024-11-20 14:00:59.944123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.375 [2024-11-20 14:00:59.944135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:74720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.375 [2024-11-20 14:00:59.944148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.375 [2024-11-20 14:00:59.944161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:74728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.375 [2024-11-20 14:00:59.944173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.375 [2024-11-20 14:00:59.944184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:74736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.375 [2024-11-20 14:00:59.944194] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.375 [2024-11-20 14:00:59.944207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:74168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.375 [2024-11-20 14:00:59.944217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.375 [2024-11-20 14:00:59.944228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:74176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.375 [2024-11-20 14:00:59.944238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.375 [2024-11-20 14:00:59.944250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:74184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.375 [2024-11-20 14:00:59.944263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.375 [2024-11-20 14:00:59.944277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:74192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.375 [2024-11-20 14:00:59.944291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.375 [2024-11-20 14:00:59.944305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:74200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.375 [2024-11-20 14:00:59.944319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.375 [2024-11-20 14:00:59.944337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:74208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.375 [2024-11-20 14:00:59.944348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.375 [2024-11-20 14:00:59.944359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:74216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.375 [2024-11-20 14:00:59.944369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.375 [2024-11-20 14:00:59.944381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:74224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.375 [2024-11-20 14:00:59.944391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.375 [2024-11-20 14:00:59.944403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:74744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.375 [2024-11-20 14:00:59.944413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.375 [2024-11-20 14:00:59.944433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:74752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.375 [2024-11-20 14:00:59.944443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.375 [2024-11-20 14:00:59.944457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:74760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.375 [2024-11-20 14:00:59.944467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.375 [2024-11-20 14:00:59.944479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:74768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.375 [2024-11-20 14:00:59.944491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.375 [2024-11-20 14:00:59.944504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:74776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.375 [2024-11-20 14:00:59.944515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.375 [2024-11-20 14:00:59.944526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:74784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.375 [2024-11-20 14:00:59.944537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.375 [2024-11-20 14:00:59.944548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:74792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.375 [2024-11-20 14:00:59.944558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.375 [2024-11-20 14:00:59.944569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:74800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.375 [2024-11-20 14:00:59.944581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.375 [2024-11-20 14:00:59.944595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:74808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.375 [2024-11-20 14:00:59.944605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.375 [2024-11-20 14:00:59.944617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:74816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.375 [2024-11-20 14:00:59.944628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.375 [2024-11-20 14:00:59.944640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:74824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.375 [2024-11-20 14:00:59.944650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.376 [2024-11-20 14:00:59.944663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:74832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.376 [2024-11-20 14:00:59.944673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.376 
[2024-11-20 14:00:59.944684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:74840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.376 [2024-11-20 14:00:59.944694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.376 [2024-11-20 14:00:59.944716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:74848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.376 [2024-11-20 14:00:59.944733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.376 [2024-11-20 14:00:59.944744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:74856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.376 [2024-11-20 14:00:59.944755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.376 [2024-11-20 14:00:59.944768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:74864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.376 [2024-11-20 14:00:59.944778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.376 [2024-11-20 14:00:59.944790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:74872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.376 [2024-11-20 14:00:59.944800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.376 [2024-11-20 14:00:59.944812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:74880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.376 [2024-11-20 14:00:59.944822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.376 [2024-11-20 14:00:59.944836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:74888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.376 [2024-11-20 14:00:59.944848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.376 [2024-11-20 14:00:59.944861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:74896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.376 [2024-11-20 14:00:59.944876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.376 [2024-11-20 14:00:59.944889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:74232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.376 [2024-11-20 14:00:59.944900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.376 [2024-11-20 14:00:59.944914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:74240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.376 [2024-11-20 14:00:59.944924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.376 [2024-11-20 14:00:59.944937] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:74248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.376 [2024-11-20 14:00:59.944947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.376 [2024-11-20 14:00:59.944958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:74256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.376 [2024-11-20 14:00:59.944970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.376 [2024-11-20 14:00:59.944981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:74264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.376 [2024-11-20 14:00:59.944991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.376 [2024-11-20 14:00:59.945002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:74272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.376 [2024-11-20 14:00:59.945014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.376 [2024-11-20 14:00:59.945032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:74280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.376 [2024-11-20 14:00:59.945042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.376 [2024-11-20 14:00:59.945055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:74288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.376 [2024-11-20 14:00:59.945066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.376 [2024-11-20 14:00:59.945078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:74296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.376 [2024-11-20 14:00:59.945089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.376 [2024-11-20 14:00:59.945105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:74304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.376 [2024-11-20 14:00:59.945115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.376 [2024-11-20 14:00:59.945128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:74312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.376 [2024-11-20 14:00:59.945138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.376 [2024-11-20 14:00:59.945149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:74320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.376 [2024-11-20 14:00:59.945160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.376 [2024-11-20 14:00:59.945172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:84 nsid:1 lba:74328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.376 [2024-11-20 14:00:59.945182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.376 [2024-11-20 14:00:59.945193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:74336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.376 [2024-11-20 14:00:59.945206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.376 [2024-11-20 14:00:59.945218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:74344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.376 [2024-11-20 14:00:59.945227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.376 [2024-11-20 14:00:59.945238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:74352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.376 [2024-11-20 14:00:59.945249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.376 [2024-11-20 14:00:59.945260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:74904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.376 [2024-11-20 14:00:59.945270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.376 [2024-11-20 14:00:59.945283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:74912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.376 [2024-11-20 14:00:59.945294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.376 [2024-11-20 14:00:59.945309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:74920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.376 [2024-11-20 14:00:59.945323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.376 [2024-11-20 14:00:59.945335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:74928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.376 [2024-11-20 14:00:59.945345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.376 [2024-11-20 14:00:59.945357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:74936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.376 [2024-11-20 14:00:59.945367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.376 [2024-11-20 14:00:59.945378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:74944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.376 [2024-11-20 14:00:59.945390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.376 [2024-11-20 14:00:59.945401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:74952 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.376 [2024-11-20 14:00:59.945411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.376 [2024-11-20 14:00:59.945424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:74960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.376 [2024-11-20 14:00:59.945434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.376 [2024-11-20 14:00:59.945446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:74968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.376 [2024-11-20 14:00:59.945458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.377 [2024-11-20 14:00:59.945473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:74976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.377 [2024-11-20 14:00:59.945483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.377 [2024-11-20 14:00:59.945497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:74984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.377 [2024-11-20 14:00:59.945509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.377 [2024-11-20 14:00:59.945520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:74992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.377 [2024-11-20 14:00:59.945529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.377 [2024-11-20 14:00:59.945548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:75000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.377 [2024-11-20 14:00:59.945558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.377 [2024-11-20 14:00:59.945569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:75008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.377 [2024-11-20 14:00:59.945579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.377 [2024-11-20 14:00:59.945593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:74360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.377 [2024-11-20 14:00:59.945603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.377 [2024-11-20 14:00:59.945614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:74368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.377 [2024-11-20 14:00:59.945631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.377 [2024-11-20 14:00:59.945642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:74376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.377 
[2024-11-20 14:00:59.945656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.377 [2024-11-20 14:00:59.945668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:74384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.377 [2024-11-20 14:00:59.945680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.377 [2024-11-20 14:00:59.945692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:74392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.377 [2024-11-20 14:00:59.945703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.377 [2024-11-20 14:00:59.945723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.377 [2024-11-20 14:00:59.945733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.377 [2024-11-20 14:00:59.945745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:74408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.377 [2024-11-20 14:00:59.945755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.377 [2024-11-20 14:00:59.945767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x797e70 is same with the state(6) to be set 00:40:09.377 [2024-11-20 14:00:59.945780] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:40:09.377 [2024-11-20 14:00:59.945787] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:40:09.377 [2024-11-20 14:00:59.945795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74416 len:8 PRP1 0x0 PRP2 0x0 00:40:09.377 [2024-11-20 14:00:59.945805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.377 [2024-11-20 14:00:59.945818] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:40:09.377 [2024-11-20 14:00:59.945826] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:40:09.377 [2024-11-20 14:00:59.945834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75016 len:8 PRP1 0x0 PRP2 0x0 00:40:09.377 [2024-11-20 14:00:59.945844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.377 [2024-11-20 14:00:59.945854] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:40:09.377 [2024-11-20 14:00:59.945862] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:40:09.377 [2024-11-20 14:00:59.945870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75024 len:8 PRP1 0x0 PRP2 0x0 00:40:09.377 [2024-11-20 14:00:59.945882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.377 [2024-11-20 14:00:59.945894] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:40:09.377 [2024-11-20 14:00:59.945912] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:40:09.377 [2024-11-20 14:00:59.945922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75032 len:8 PRP1 0x0 PRP2 0x0 00:40:09.377 [2024-11-20 14:00:59.945934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.377 [2024-11-20 14:00:59.945952] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:40:09.377 [2024-11-20 14:00:59.945959] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:40:09.377 [2024-11-20 14:00:59.945967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75040 len:8 PRP1 0x0 PRP2 0x0 00:40:09.377 [2024-11-20 14:00:59.945978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.377 [2024-11-20 14:00:59.945988] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:40:09.377 [2024-11-20 14:00:59.945995] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:40:09.377 [2024-11-20 14:00:59.946003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75048 len:8 PRP1 0x0 PRP2 0x0 00:40:09.377 [2024-11-20 14:00:59.946012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.377 [2024-11-20 14:00:59.946022] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:40:09.377 [2024-11-20 14:00:59.946031] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:40:09.377 [2024-11-20 14:00:59.946038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75056 len:8 PRP1 0x0 PRP2 0x0 00:40:09.377 [2024-11-20 14:00:59.946048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.377 [2024-11-20 14:00:59.946059] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:40:09.377 [2024-11-20 14:00:59.946067] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:40:09.377 [2024-11-20 14:00:59.946075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75064 len:8 PRP1 0x0 PRP2 0x0 00:40:09.377 [2024-11-20 14:00:59.946084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.377 [2024-11-20 14:00:59.946096] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:40:09.377 [2024-11-20 14:00:59.946104] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:40:09.377 [2024-11-20 14:00:59.946114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75072 len:8 PRP1 0x0 PRP2 0x0 00:40:09.377 [2024-11-20 14:00:59.946123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.377 [2024-11-20 14:00:59.946135] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:40:09.377 [2024-11-20 14:00:59.946143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:40:09.377 [2024-11-20 14:00:59.946150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75080 len:8 PRP1 0x0 PRP2 0x0 00:40:09.377 [2024-11-20 14:00:59.946159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.377 [2024-11-20 14:00:59.946170] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:40:09.377 [2024-11-20 14:00:59.946178] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:40:09.377 [2024-11-20 14:00:59.946186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75088 len:8 PRP1 0x0 PRP2 0x0 00:40:09.377 [2024-11-20 14:00:59.946195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.377 [2024-11-20 14:00:59.946204] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:40:09.377 [2024-11-20 14:00:59.946218] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:40:09.377 [2024-11-20 14:00:59.946227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75096 len:8 PRP1 0x0 PRP2 0x0 00:40:09.377 [2024-11-20 14:00:59.946241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.377 [2024-11-20 14:00:59.946252] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:40:09.377 [2024-11-20 14:00:59.946260] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:40:09.377 [2024-11-20 14:00:59.946268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75104 len:8 PRP1 0x0 PRP2 0x0 00:40:09.377 [2024-11-20 14:00:59.946277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.377 [2024-11-20 14:00:59.946289] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:40:09.377 [2024-11-20 14:00:59.946298] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:40:09.378 [2024-11-20 14:00:59.946307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75112 len:8 PRP1 0x0 PRP2 0x0 00:40:09.378 [2024-11-20 14:00:59.946318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.378 [2024-11-20 14:00:59.946328] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:40:09.378 [2024-11-20 14:00:59.946337] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:40:09.378 [2024-11-20 14:00:59.946344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75120 len:8 PRP1 0x0 PRP2 0x0 00:40:09.378 [2024-11-20 14:00:59.946354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.378 [2024-11-20 14:00:59.946364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:40:09.378 [2024-11-20 14:00:59.946372] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:40:09.378 [2024-11-20 14:00:59.946380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74424 len:8 PRP1 0x0 PRP2 0x0 00:40:09.378 [2024-11-20 14:00:59.946389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.378 [2024-11-20 14:00:59.946399] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:40:09.378 [2024-11-20 14:00:59.946406] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:40:09.378 [2024-11-20 14:00:59.946415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74432 len:8 PRP1 0x0 PRP2 0x0 00:40:09.378 [2024-11-20 14:00:59.946425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.378 [2024-11-20 14:00:59.946435] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:40:09.378 [2024-11-20 14:00:59.946442] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:40:09.378 [2024-11-20 14:00:59.946451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74440 len:8 PRP1 0x0 PRP2 0x0 00:40:09.378 [2024-11-20 14:00:59.946460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.378 [2024-11-20 14:00:59.946470] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:40:09.378 [2024-11-20 14:00:59.946479] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:40:09.378 [2024-11-20 14:00:59.946487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74448 len:8 PRP1 0x0 PRP2 0x0 00:40:09.378 [2024-11-20 14:00:59.946498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.378 [2024-11-20 14:00:59.946509] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:40:09.378 [2024-11-20 14:00:59.946521] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:40:09.378 [2024-11-20 14:00:59.946534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74456 len:8 PRP1 0x0 PRP2 0x0 00:40:09.378 [2024-11-20 14:00:59.946544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.378 [2024-11-20 14:00:59.946555] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:40:09.378 [2024-11-20 14:00:59.946562] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:40:09.378 [2024-11-20 14:00:59.946570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74464 len:8 PRP1 0x0 PRP2 0x0 00:40:09.378 [2024-11-20 14:00:59.946579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.378 [2024-11-20 14:00:59.946589] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:40:09.378 [2024-11-20 14:00:59.946598] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:40:09.378 [2024-11-20 14:00:59.946606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74472 len:8 PRP1 0x0 PRP2 0x0 00:40:09.378 [2024-11-20 14:00:59.946616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.378 [2024-11-20 14:00:59.946625] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:40:09.378 [2024-11-20 14:00:59.946634] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:40:09.378 [2024-11-20 14:00:59.946642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74480 len:8 PRP1 0x0 PRP2 0x0 00:40:09.378 [2024-11-20 14:00:59.963644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.378 [2024-11-20 14:00:59.963753] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:40:09.378 [2024-11-20 14:00:59.963832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:40:09.378 [2024-11-20 14:00:59.963851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.378 [2024-11-20 14:00:59.963868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:40:09.378 [2024-11-20 14:00:59.963882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.378 [2024-11-20 14:00:59.963897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:40:09.378 [2024-11-20 14:00:59.963911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.378 [2024-11-20 14:00:59.963927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:40:09.378 [2024-11-20 14:00:59.963940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.378 [2024-11-20 14:00:59.963954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:40:09.378 [2024-11-20 14:00:59.964011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f9710 (9): Bad file descriptor 00:40:09.378 [2024-11-20 14:00:59.968404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:40:09.378 [2024-11-20 14:00:59.996258] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
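The abort storm above is the expected side effect of tearing down the active path: every I/O still queued on the deleted submission queue is completed with ABORTED - SQ DELETION while bdev_nvme fails over to the alternate listener and resets the controller. As a rough sketch only, the steps the surrounding host/failover.sh trace records can be replayed by hand with the same RPCs; the addresses, ports, NQN, and flags below are copied from this log, and only the grouping into a standalone snippet (and the variable names RPC, SOCK, NQN) is assumed rather than taken from the test script itself.

    # Sketch: replay the traced failover steps against a running SPDK target
    # and bdevperf instance (not the full host/failover.sh test).
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock
    NQN=nqn.2016-06.io.spdk:cnode1

    # Give the subsystem extra TCP listeners so there are paths to fail over to.
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.3 -s 4421
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.3 -s 4422

    # Attach the same controller on each path in failover mode.
    $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n $NQN -x failover
    $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n $NQN -x failover
    $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n $NQN -x failover

    # Drop the active path; in-flight I/O is aborted (the SQ DELETION notices
    # above) and the bdev resets onto one of the surviving listeners.
    $RPC -s $SOCK bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n $NQN

    # The test later asserts on how many successful resets were logged:
    grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt

The count check at the end mirrors the `grep -c 'Resetting controller successful'` assertion visible further down in this trace; the remainder of this section is the unmodified test output.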
00:40:09.378 9560.40 IOPS, 37.35 MiB/s [2024-11-20T14:01:06.701Z] 9516.73 IOPS, 37.17 MiB/s [2024-11-20T14:01:06.701Z] 9479.67 IOPS, 37.03 MiB/s [2024-11-20T14:01:06.701Z] 9440.31 IOPS, 36.88 MiB/s [2024-11-20T14:01:06.701Z] 9518.57 IOPS, 37.18 MiB/s [2024-11-20T14:01:06.701Z] 9607.73 IOPS, 37.53 MiB/s 00:40:09.378 Latency(us) 00:40:09.378 [2024-11-20T14:01:06.701Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:09.378 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:40:09.378 Verification LBA range: start 0x0 length 0x4000 00:40:09.378 NVMe0n1 : 15.01 9608.15 37.53 257.24 0.00 12950.72 420.33 25870.98 00:40:09.378 [2024-11-20T14:01:06.701Z] =================================================================================================================== 00:40:09.378 [2024-11-20T14:01:06.701Z] Total : 9608.15 37.53 257.24 0.00 12950.72 420.33 25870.98 00:40:09.378 Received shutdown signal, test time was about 15.000000 seconds 00:40:09.378 00:40:09.378 Latency(us) 00:40:09.378 [2024-11-20T14:01:06.701Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:09.378 [2024-11-20T14:01:06.701Z] =================================================================================================================== 00:40:09.378 [2024-11-20T14:01:06.701Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:09.378 14:01:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:40:09.378 14:01:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:40:09.378 14:01:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:40:09.378 14:01:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:40:09.378 14:01:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75524 00:40:09.378 14:01:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75524 /var/tmp/bdevperf.sock 00:40:09.378 14:01:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75524 ']' 00:40:09.378 14:01:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:09.378 14:01:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:09.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:40:09.378 14:01:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:40:09.378 14:01:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:09.378 14:01:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:40:09.951 14:01:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:09.951 14:01:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:40:09.951 14:01:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:40:09.951 [2024-11-20 14:01:07.151309] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:40:09.951 14:01:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:40:10.210 [2024-11-20 14:01:07.355073] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:40:10.210 14:01:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:40:10.469 NVMe0n1 00:40:10.470 14:01:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:40:10.728 00:40:10.728 14:01:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:40:10.986 00:40:10.986 14:01:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:40:10.986 14:01:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:40:11.245 14:01:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:40:11.505 14:01:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:40:14.817 14:01:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:40:14.817 14:01:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:40:14.817 14:01:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75601 00:40:14.817 14:01:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:40:14.817 14:01:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 75601 00:40:16.196 { 00:40:16.196 "results": [ 00:40:16.196 { 00:40:16.196 "job": "NVMe0n1", 00:40:16.196 "core_mask": "0x1", 00:40:16.196 "workload": "verify", 00:40:16.196 "status": "finished", 00:40:16.196 "verify_range": { 00:40:16.196 "start": 0, 00:40:16.196 "length": 16384 00:40:16.196 }, 00:40:16.196 "queue_depth": 128, 
00:40:16.196 "io_size": 4096, 00:40:16.196 "runtime": 1.011718, 00:40:16.196 "iops": 7694.831959103229, 00:40:16.196 "mibps": 30.057937340246987, 00:40:16.196 "io_failed": 0, 00:40:16.196 "io_timeout": 0, 00:40:16.196 "avg_latency_us": 16584.884392502656, 00:40:16.196 "min_latency_us": 1516.7720524017468, 00:40:16.196 "max_latency_us": 14767.063755458516 00:40:16.196 } 00:40:16.196 ], 00:40:16.196 "core_count": 1 00:40:16.196 } 00:40:16.196 14:01:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:40:16.196 [2024-11-20 14:01:06.042714] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:40:16.196 [2024-11-20 14:01:06.042824] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75524 ] 00:40:16.196 [2024-11-20 14:01:06.189128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:16.196 [2024-11-20 14:01:06.236123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:16.196 [2024-11-20 14:01:06.283997] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:16.196 [2024-11-20 14:01:08.711747] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:40:16.196 [2024-11-20 14:01:08.711867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:40:16.196 [2024-11-20 14:01:08.711884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.196 [2024-11-20 14:01:08.711897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:40:16.196 [2024-11-20 14:01:08.711907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.196 [2024-11-20 14:01:08.711919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:40:16.196 [2024-11-20 14:01:08.711931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.196 [2024-11-20 14:01:08.711942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:40:16.196 [2024-11-20 14:01:08.711953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.196 [2024-11-20 14:01:08.711963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:40:16.196 [2024-11-20 14:01:08.712005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:40:16.196 [2024-11-20 14:01:08.712027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1866710 (9): Bad file descriptor 00:40:16.197 [2024-11-20 14:01:08.714749] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 
00:40:16.197 Running I/O for 1 seconds... 00:40:16.197 7657.00 IOPS, 29.91 MiB/s 00:40:16.197 Latency(us) 00:40:16.197 [2024-11-20T14:01:13.520Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:16.197 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:40:16.197 Verification LBA range: start 0x0 length 0x4000 00:40:16.197 NVMe0n1 : 1.01 7694.83 30.06 0.00 0.00 16584.88 1516.77 14767.06 00:40:16.197 [2024-11-20T14:01:13.520Z] =================================================================================================================== 00:40:16.197 [2024-11-20T14:01:13.520Z] Total : 7694.83 30.06 0.00 0.00 16584.88 1516.77 14767.06 00:40:16.197 14:01:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:40:16.197 14:01:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:40:16.197 14:01:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:40:16.456 14:01:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:40:16.456 14:01:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:40:16.715 14:01:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:40:16.974 14:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:40:20.263 14:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:40:20.263 14:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:40:20.263 14:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 75524 00:40:20.263 14:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75524 ']' 00:40:20.263 14:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75524 00:40:20.263 14:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:40:20.263 14:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:20.263 14:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75524 00:40:20.263 14:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:20.263 14:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:20.263 killing process with pid 75524 00:40:20.263 14:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75524' 00:40:20.263 14:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75524 00:40:20.263 14:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75524 00:40:20.521 14:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:40:20.521 14:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:20.779 14:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:40:20.779 14:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:40:20.779 14:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:40:20.779 14:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:20.779 14:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:40:20.779 14:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:20.779 14:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:40:20.779 14:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:20.779 14:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:20.779 rmmod nvme_tcp 00:40:20.779 rmmod nvme_fabrics 00:40:20.779 rmmod nvme_keyring 00:40:20.780 14:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:20.780 14:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:40:20.780 14:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:40:20.780 14:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 75273 ']' 00:40:20.780 14:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 75273 00:40:20.780 14:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75273 ']' 00:40:20.780 14:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75273 00:40:20.780 14:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:40:20.780 14:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:20.780 14:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75273 00:40:20.780 14:01:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:20.780 14:01:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:20.780 killing process with pid 75273 00:40:20.780 14:01:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75273' 00:40:20.780 14:01:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75273 00:40:20.780 14:01:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75273 00:40:21.039 14:01:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:21.039 14:01:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:21.039 14:01:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:21.039 14:01:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:40:21.039 14:01:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:40:21.039 14:01:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:40:21.039 14:01:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:21.039 14:01:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 
-- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:21.039 14:01:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:40:21.039 14:01:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:40:21.039 14:01:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:40:21.039 14:01:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:40:21.039 14:01:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:40:21.039 14:01:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:40:21.299 14:01:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:40:21.299 14:01:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:40:21.299 14:01:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:40:21.299 14:01:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:40:21.299 14:01:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:40:21.299 14:01:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:40:21.299 14:01:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:40:21.299 14:01:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:40:21.299 14:01:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:40:21.299 14:01:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:21.299 14:01:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:21.299 14:01:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:21.299 14:01:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:40:21.299 00:40:21.299 real 0m32.270s 00:40:21.299 user 2m3.370s 00:40:21.299 sys 0m5.243s 00:40:21.299 14:01:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:21.299 14:01:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:40:21.299 ************************************ 00:40:21.299 END TEST nvmf_failover 00:40:21.299 ************************************ 00:40:21.299 14:01:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:40:21.299 14:01:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:21.299 14:01:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:21.299 14:01:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:40:21.560 ************************************ 00:40:21.560 START TEST nvmf_host_discovery 00:40:21.560 ************************************ 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:40:21.560 * Looking for test storage... 
00:40:21.560 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:21.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:21.560 --rc genhtml_branch_coverage=1 00:40:21.560 --rc genhtml_function_coverage=1 00:40:21.560 --rc genhtml_legend=1 00:40:21.560 --rc geninfo_all_blocks=1 00:40:21.560 --rc geninfo_unexecuted_blocks=1 00:40:21.560 00:40:21.560 ' 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:21.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:21.560 --rc genhtml_branch_coverage=1 00:40:21.560 --rc genhtml_function_coverage=1 00:40:21.560 --rc genhtml_legend=1 00:40:21.560 --rc geninfo_all_blocks=1 00:40:21.560 --rc geninfo_unexecuted_blocks=1 00:40:21.560 00:40:21.560 ' 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:21.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:21.560 --rc genhtml_branch_coverage=1 00:40:21.560 --rc genhtml_function_coverage=1 00:40:21.560 --rc genhtml_legend=1 00:40:21.560 --rc geninfo_all_blocks=1 00:40:21.560 --rc geninfo_unexecuted_blocks=1 00:40:21.560 00:40:21.560 ' 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:21.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:21.560 --rc genhtml_branch_coverage=1 00:40:21.560 --rc genhtml_function_coverage=1 00:40:21.560 --rc genhtml_legend=1 00:40:21.560 --rc geninfo_all_blocks=1 00:40:21.560 --rc geninfo_unexecuted_blocks=1 00:40:21.560 00:40:21.560 ' 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:21.560 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:21.561 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:21.821 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:40:21.821 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=105ec898-1662-46bd-85be-b241e399edb9 00:40:21.821 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:21.821 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:21.821 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:40:21.821 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:21.821 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:21.821 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:40:21.821 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:21.821 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:21.821 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:21.821 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:21.821 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:21.821 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:21.821 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:40:21.821 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:21.821 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:40:21.821 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:21.821 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:21.821 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:21.821 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:21.821 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:21.821 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:21.821 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:21.821 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:21.821 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:21.821 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:21.821 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:40:21.821 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:40:21.821 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:40:21.822 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:40:21.822 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:40:21.822 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:40:21.822 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:40:21.822 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:21.822 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:21.822 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:21.822 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:21.822 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:21.822 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:21.822 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:21.822 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:21.822 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:40:21.822 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:40:21.822 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:40:21.822 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:40:21.822 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:40:21.822 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:40:21.822 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:21.822 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:40:21.822 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:40:21.822 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:40:21.822 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:21.822 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:40:21.822 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:40:21.822 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:40:21.822 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:40:21.822 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:40:21.822 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:40:21.822 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:40:21.822 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:40:21.822 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:40:21.822 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:40:21.822 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:40:21.822 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:40:21.822 Cannot find device "nvmf_init_br" 00:40:21.822 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:40:21.822 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:40:21.822 Cannot find device "nvmf_init_br2" 00:40:21.822 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:40:21.822 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:40:21.822 Cannot find device "nvmf_tgt_br" 00:40:21.822 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:40:21.822 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:40:21.822 Cannot find device "nvmf_tgt_br2" 00:40:21.822 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:40:21.822 14:01:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:40:21.822 Cannot find device "nvmf_init_br" 00:40:21.822 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:40:21.822 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:40:21.822 Cannot find device "nvmf_init_br2" 00:40:21.822 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:40:21.822 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:40:21.822 Cannot find device "nvmf_tgt_br" 00:40:21.822 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:40:21.822 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:40:21.822 Cannot find device "nvmf_tgt_br2" 00:40:21.822 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:40:21.822 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:40:21.822 Cannot find device "nvmf_br" 00:40:21.822 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:40:21.822 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:40:21.822 Cannot find device "nvmf_init_if" 00:40:21.822 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:40:21.822 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:40:21.822 Cannot find device "nvmf_init_if2" 00:40:21.822 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:40:21.822 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:40:21.822 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:40:21.822 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:40:21.822 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:40:21.822 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:21.822 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:40:21.822 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:40:21.822 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:40:21.822 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:40:21.822 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:40:22.081 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:40:22.081 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:40:22.081 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:40:22.081 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:40:22.081 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:40:22.081 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:40:22.081 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:40:22.081 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:40:22.081 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:40:22.081 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:40:22.081 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:40:22.082 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:40:22.082 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:40:22.082 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:40:22.082 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:40:22.082 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:40:22.082 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:40:22.082 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:40:22.082 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:40:22.082 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:40:22.082 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:40:22.082 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:40:22.082 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:40:22.082 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:40:22.082 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:40:22.082 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:40:22.082 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:40:22.082 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:40:22.082 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:40:22.082 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:40:22.082 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.123 ms 00:40:22.082 00:40:22.082 --- 10.0.0.3 ping statistics --- 00:40:22.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:22.082 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:40:22.082 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:40:22.082 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:40:22.082 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.091 ms 00:40:22.082 00:40:22.082 --- 10.0.0.4 ping statistics --- 00:40:22.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:22.082 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:40:22.082 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:40:22.082 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:22.082 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:40:22.082 00:40:22.082 --- 10.0.0.1 ping statistics --- 00:40:22.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:22.082 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:40:22.082 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:40:22.082 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:40:22.082 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:40:22.082 00:40:22.082 --- 10.0.0.2 ping statistics --- 00:40:22.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:22.082 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:40:22.082 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:22.082 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:40:22.082 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:22.082 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:22.082 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:22.082 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:22.082 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:22.082 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:22.082 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:22.082 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:40:22.082 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:22.082 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:22.082 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:22.082 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=75925 00:40:22.082 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:40:22.082 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 75925 00:40:22.082 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 75925 ']' 00:40:22.082 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:22.082 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:22.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:22.082 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:22.082 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:22.082 14:01:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:22.341 [2024-11-20 14:01:19.444133] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
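With the veth topology up and all four addresses reachable (the pings above), the harness starts the NVMe-oF target inside the nvmf_tgt_ns_spdk namespace and waits for its RPC socket. Stripped of the xtrace wrappers, that start-up is roughly the following; the readiness loop is a simplified stand-in for the harness's waitforlisten, not its actual implementation:

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # poll the default RPC socket until the app answers
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods > /dev/null 2>&1; do
      sleep 0.5
  done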
00:40:22.342 [2024-11-20 14:01:19.444195] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:22.342 [2024-11-20 14:01:19.596754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:22.342 [2024-11-20 14:01:19.651794] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:22.342 [2024-11-20 14:01:19.651843] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:22.342 [2024-11-20 14:01:19.651849] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:22.342 [2024-11-20 14:01:19.651854] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:22.342 [2024-11-20 14:01:19.651858] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:22.342 [2024-11-20 14:01:19.652161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:22.601 [2024-11-20 14:01:19.716594] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:23.170 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:23.170 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:40:23.170 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:23.170 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:23.170 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:23.170 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:23.170 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:23.170 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:23.170 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:23.170 [2024-11-20 14:01:20.425559] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:23.170 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:23.170 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:40:23.170 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:23.170 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:23.171 [2024-11-20 14:01:20.437665] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:40:23.171 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:23.171 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:40:23.171 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:23.171 14:01:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:23.171 null0 00:40:23.171 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:23.171 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:40:23.171 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:23.171 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:23.171 null1 00:40:23.171 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:23.171 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:40:23.171 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:23.171 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:23.171 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:23.171 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=75957 00:40:23.171 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:40:23.171 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 75957 /tmp/host.sock 00:40:23.171 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 75957 ']' 00:40:23.171 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:40:23.171 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:23.171 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:40:23.171 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:40:23.171 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:23.171 14:01:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:23.431 [2024-11-20 14:01:20.537005] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
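Two SPDK applications are now in play: the target started earlier (pid 75925, driven over the default /var/tmp/spdk.sock) and a second app acting as the discovery host (pid 75957, given its own socket with -r /tmp/host.sock). The rpc_cmd calls traced over the next lines boil down to the following, where rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py:

  # host side: enable bdev_nvme logging and follow the discovery service on 10.0.0.3:8009
  rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
  rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
  # target side: expose the null bdevs through a subsystem, a data listener and a host entry
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test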
00:40:23.431 [2024-11-20 14:01:20.537142] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75957 ] 00:40:23.431 [2024-11-20 14:01:20.689783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:23.690 [2024-11-20 14:01:20.758979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:23.690 [2024-11-20 14:01:20.829099] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:24.261 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:24.261 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:40:24.261 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:24.261 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:40:24.261 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.261 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:24.261 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.261 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:40:24.261 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.261 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:24.261 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.261 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:40:24.261 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:40:24.261 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:40:24.261 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.261 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:40:24.261 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:24.261 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:40:24.261 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:40:24.261 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.261 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:40:24.261 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:40:24.261 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:24.261 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.261 14:01:21 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:24.261 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:40:24.261 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:40:24.261 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:40:24.261 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.261 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:40:24.261 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:40:24.261 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.261 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:24.526 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.526 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:40:24.526 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:40:24.526 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:40:24.526 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:40:24.526 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.526 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:40:24.526 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:24.526 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.526 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:40:24.527 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:40:24.527 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:24.527 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.527 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:40:24.527 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:24.527 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:40:24.527 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:40:24.527 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.527 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:40:24.527 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:40:24.527 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.527 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:24.527 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.527 14:01:21 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:40:24.527 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:40:24.527 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:40:24.527 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:40:24.527 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.527 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:24.527 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:40:24.527 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.527 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:40:24.527 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:40:24.527 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:24.527 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.527 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:24.527 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:40:24.527 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:40:24.527 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:40:24.527 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.527 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:40:24.527 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:40:24.527 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.527 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:24.527 [2024-11-20 14:01:21.827214] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:40:24.527 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.527 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:40:24.527 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:40:24.527 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:40:24.527 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:40:24.527 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.527 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:24.527 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:40:24.786 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.786 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:40:24.786 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:40:24.786 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:24.786 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:40:24.786 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.786 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:40:24.786 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:24.786 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:40:24.786 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.786 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:40:24.786 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:40:24.786 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:40:24.786 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:40:24.786 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:40:24.786 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:40:24.786 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:40:24.786 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:40:24.786 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:40:24.786 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:40:24.786 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.786 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:24.786 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:40:24.786 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.786 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:40:24.786 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:40:24.786 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:40:24.786 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:40:24.786 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:40:24.786 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.786 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:24.786 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.786 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:40:24.786 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:40:24.786 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:40:24.786 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:40:24.786 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:40:24.786 14:01:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:40:24.787 14:01:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:40:24.787 14:01:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:40:24.787 14:01:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:40:24.787 14:01:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.787 14:01:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:24.787 14:01:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:40:24.787 14:01:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.787 14:01:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:40:24.787 14:01:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:40:25.354 [2024-11-20 14:01:22.474451] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:40:25.354 [2024-11-20 14:01:22.474480] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:40:25.354 [2024-11-20 14:01:22.474500] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:40:25.354 [2024-11-20 14:01:22.480468] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:40:25.354 [2024-11-20 14:01:22.534665] 
bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:40:25.354 [2024-11-20 14:01:22.535575] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x246de60:1 started. 00:40:25.354 [2024-11-20 14:01:22.537319] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:40:25.354 [2024-11-20 14:01:22.537403] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:40:25.354 [2024-11-20 14:01:22.542988] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x246de60 was disconnected and freed. delete nvme_qpair. 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:25.922 14:01:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 
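The notification checks traced here count the async events the host app emits as new bdevs appear after each namespace attach. Unwrapped from the xtrace noise, get_notification_count is essentially the following (rpc.py as above):

  # count events newer than the last seen notify_id and compare against the test's expectation
  notification_count=$(rpc.py -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
  (( notification_count == expected_count ))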
00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:40:25.922 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:26.182 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:40:26.182 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:40:26.183 [2024-11-20 14:01:23.274915] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x24464a0:1 started. 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:40:26.183 [2024-11-20 14:01:23.281875] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x24464a0 was disconnected and freed. delete nvme_qpair. 
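Adding the null1 namespace to nqn.2016-06.io.spdk:cnode0 triggers an AER on the host, which re-reads the namespace information and surfaces a second bdev; the equivalent manual sequence, sketched with rpc.py under the same socket layout as this run (target on the default RPC socket, host app on /tmp/host.sock; the null bdev size and block size shown are illustrative only):

    # Target side: create a null bdev and expose it as an extra namespace.
    scripts/rpc.py bdev_null_create null1 100 512
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1

    # Host side: once the async event is processed the bdev list should read
    # "nvme0n1 nvme0n2" instead of just "nvme0n1".
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs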
00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:26.183 [2024-11-20 14:01:23.386284] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:40:26.183 [2024-11-20 14:01:23.386737] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:40:26.183 [2024-11-20 14:01:23.386766] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:40:26.183 [2024-11-20 14:01:23.392723] bdev_nvme.c:7403:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:40:26.183 [2024-11-20 14:01:23.455918] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:40:26.183 [2024-11-20 14:01:23.456050] 
bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:40:26.183 [2024-11-20 14:01:23.456094] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:40:26.183 [2024-11-20 14:01:23.456123] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:40:26.183 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:40:26.184 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:40:26.184 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:40:26.184 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:40:26.184 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:40:26.184 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:40:26.184 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:26.184 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:26.184 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # (( max-- )) 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:26.507 [2024-11-20 14:01:23.586512] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:40:26.507 [2024-11-20 14:01:23.586573] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:40:26.507 [2024-11-20 14:01:23.592516] bdev_nvme.c:7266:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:40:26.507 [2024-11-20 14:01:23.592538] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:40:26.507 [2024-11-20 14:01:23.592624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:40:26.507 [2024-11-20 14:01:23.592688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:26.507 [2024-11-20 14:01:23.592789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:40:26.507 [2024-11-20 14:01:23.592830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:26.507 [2024-11-20 14:01:23.592877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:40:26.507 [2024-11-20 14:01:23.592915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:40:26.507 [2024-11-20 14:01:23.592975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:40:26.507 [2024-11-20 14:01:23.592983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:26.507 [2024-11-20 14:01:23.592990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244a230 is same with the state(6) to be set 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:26.507 14:01:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:40:26.507 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:26.508 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:40:26.508 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:40:26.508 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:40:26.508 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:40:26.508 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:40:26.508 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:40:26.508 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:40:26.508 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:40:26.508 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:40:26.508 14:01:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:40:26.508 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:40:26.508 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:40:26.508 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:26.508 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:26.508 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:26.508 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:40:26.508 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:40:26.508 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:40:26.508 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:40:26.508 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:40:26.508 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:26.508 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:26.508 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:26.508 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:40:26.508 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:40:26.508 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:40:26.508 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:40:26.508 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:40:26.508 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:40:26.508 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:40:26.508 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:40:26.508 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:40:26.508 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:26.508 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:26.508 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:40:26.767 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:26.767 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:40:26.767 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:40:26.767 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 
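Stopping the discovery service is expected to detach the controller and delete its bdevs, so the trace now drains everything to empty; the check sequence, written out with the helper names used above (their bodies sit in host/discovery.sh and common/autotest_common.sh):

    # Tear down discovery on the host app and wait for the state to drain.
    rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme

    waitforcondition '[[ "$(get_subsystem_names)" == "" ]]'   # controller nvme0 gone
    waitforcondition '[[ "$(get_bdev_list)" == "" ]]'         # nvme0n1 and nvme0n2 gone
    is_notification_count_eq 2                                # two bdev-removal notifications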
00:40:26.767 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:40:26.767 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:40:26.767 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:40:26.767 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:40:26.767 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:40:26.767 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:26.767 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:26.767 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:26.767 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:40:26.767 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:40:26.767 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:40:26.767 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:26.767 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:40:26.767 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:40:26.767 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:40:26.767 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:40:26.767 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:40:26.767 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:40:26.767 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:40:26.767 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:40:26.767 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:40:26.767 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:40:26.767 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:40:26.767 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:40:26.767 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:26.767 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:26.767 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:26.767 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:40:26.767 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:40:26.767 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:40:26.767 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:40:26.767 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:40:26.767 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:26.767 14:01:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:27.704 [2024-11-20 14:01:24.990069] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:40:27.704 [2024-11-20 14:01:24.990149] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:40:27.704 [2024-11-20 14:01:24.990185] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:40:27.704 [2024-11-20 14:01:24.996120] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:40:27.963 [2024-11-20 14:01:25.054345] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:40:27.963 [2024-11-20 14:01:25.055045] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x247a770:1 started. 00:40:27.963 [2024-11-20 14:01:25.056471] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:40:27.963 [2024-11-20 14:01:25.056551] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:40:27.963 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:27.963 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:40:27.963 [2024-11-20 14:01:25.058921] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x247a770 was disconnected and freed. delete nvme_qpair. 
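Immediately after restarting discovery, the test asserts that a second bdev_nvme_start_discovery with the same controller base name must be rejected; the NOT wrapper expanded below inverts the exit status of the wrapped command, roughly like this sketch reconstructed from the es=... bookkeeping in the trace (the real helper in common/autotest_common.sh is more elaborate):

    # Succeed only if the wrapped command fails; exit codes above 128 (signal
    # deaths) are not treated as the expected failure here.
    NOT() {
        local es=0
        "$@" || es=$?
        ((es > 128)) && return "$es"
        ((es != 0))
    }

    # Expected to fail with JSON-RPC error -17 "File exists", because a discovery
    # service named "nvme" is already running on this host socket.
    NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 \
        -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w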
00:40:27.963 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:40:27.963 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:40:27.963 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:40:27.963 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:27.963 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:40:27.963 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:27.963 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:40:27.963 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:27.963 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:27.963 request: 00:40:27.963 { 00:40:27.963 "name": "nvme", 00:40:27.963 "trtype": "tcp", 00:40:27.963 "traddr": "10.0.0.3", 00:40:27.963 "adrfam": "ipv4", 00:40:27.963 "trsvcid": "8009", 00:40:27.963 "hostnqn": "nqn.2021-12.io.spdk:test", 00:40:27.963 "wait_for_attach": true, 00:40:27.963 "method": "bdev_nvme_start_discovery", 00:40:27.963 "req_id": 1 00:40:27.963 } 00:40:27.963 Got JSON-RPC error response 00:40:27.963 response: 00:40:27.963 { 00:40:27.963 "code": -17, 00:40:27.963 "message": "File exists" 00:40:27.963 } 00:40:27.963 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:40:27.963 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:40:27.963 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:27.963 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:27.963 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:27.963 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:40:27.963 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:40:27.964 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:40:27.964 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:27.964 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:27.964 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:40:27.964 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:40:27.964 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:27.964 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:40:27.964 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:40:27.964 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:27.964 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:40:27.964 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:27.964 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:27.964 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:40:27.964 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:40:27.964 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:27.964 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:40:27.964 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:40:27.964 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:40:27.964 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:40:27.964 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:40:27.964 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:27.964 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:40:27.964 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:27.964 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:40:27.964 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:27.964 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:27.964 request: 00:40:27.964 { 00:40:27.964 "name": "nvme_second", 00:40:27.964 "trtype": "tcp", 00:40:27.964 "traddr": "10.0.0.3", 00:40:27.964 "adrfam": "ipv4", 00:40:27.964 "trsvcid": "8009", 00:40:27.964 "hostnqn": "nqn.2021-12.io.spdk:test", 00:40:27.964 "wait_for_attach": true, 00:40:27.964 "method": "bdev_nvme_start_discovery", 00:40:27.964 "req_id": 1 00:40:27.964 } 00:40:27.964 Got JSON-RPC error response 00:40:27.964 response: 00:40:27.964 { 00:40:27.964 "code": -17, 00:40:27.964 "message": "File exists" 00:40:27.964 } 00:40:27.964 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:40:27.964 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:40:27.964 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:27.964 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:27.964 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:27.964 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # 
get_discovery_ctrlrs 00:40:27.964 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:40:27.964 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:40:27.964 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:40:27.964 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:40:27.964 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:27.964 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:27.964 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:27.964 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:40:27.964 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:40:27.964 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:40:27.964 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:27.964 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:27.964 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:27.964 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:40:27.964 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:40:28.222 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:28.222 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:40:28.222 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:40:28.222 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:40:28.222 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:40:28.222 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:40:28.222 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:28.222 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:40:28.222 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:28.223 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:40:28.223 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:28.223 14:01:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:29.160 [2024-11-20 14:01:26.326795] uring.c: 664:uring_sock_create: *ERROR*: connect() 
failed, errno = 111 00:40:29.160 [2024-11-20 14:01:26.326917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2445bd0 with addr=10.0.0.3, port=8010 00:40:29.160 [2024-11-20 14:01:26.326977] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:40:29.160 [2024-11-20 14:01:26.326999] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:40:29.160 [2024-11-20 14:01:26.327019] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:40:30.097 [2024-11-20 14:01:27.324854] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:40:30.097 [2024-11-20 14:01:27.324965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2445bd0 with addr=10.0.0.3, port=8010 00:40:30.097 [2024-11-20 14:01:27.325024] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:40:30.097 [2024-11-20 14:01:27.325056] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:40:30.097 [2024-11-20 14:01:27.325073] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:40:31.033 [2024-11-20 14:01:28.322815] bdev_nvme.c:7522:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:40:31.033 request: 00:40:31.033 { 00:40:31.033 "name": "nvme_second", 00:40:31.033 "trtype": "tcp", 00:40:31.033 "traddr": "10.0.0.3", 00:40:31.033 "adrfam": "ipv4", 00:40:31.033 "trsvcid": "8010", 00:40:31.033 "hostnqn": "nqn.2021-12.io.spdk:test", 00:40:31.033 "wait_for_attach": false, 00:40:31.033 "attach_timeout_ms": 3000, 00:40:31.033 "method": "bdev_nvme_start_discovery", 00:40:31.033 "req_id": 1 00:40:31.033 } 00:40:31.033 Got JSON-RPC error response 00:40:31.033 response: 00:40:31.033 { 00:40:31.033 "code": -110, 00:40:31.033 "message": "Connection timed out" 00:40:31.033 } 00:40:31.033 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:40:31.033 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:40:31.033 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:31.033 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:31.033 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:31.033 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:40:31.033 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:40:31.033 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:40:31.033 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:40:31.033 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:31.033 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:40:31.033 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:31.033 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:31.292 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:40:31.292 14:01:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:40:31.292 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 75957 00:40:31.292 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:40:31.292 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:31.292 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:40:31.292 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:31.292 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:40:31.292 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:31.292 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:31.292 rmmod nvme_tcp 00:40:31.292 rmmod nvme_fabrics 00:40:31.292 rmmod nvme_keyring 00:40:31.292 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:31.292 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:40:31.292 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:40:31.292 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 75925 ']' 00:40:31.292 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 75925 00:40:31.292 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 75925 ']' 00:40:31.292 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 75925 00:40:31.292 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:40:31.292 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:31.292 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75925 00:40:31.292 killing process with pid 75925 00:40:31.292 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:31.292 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:31.292 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75925' 00:40:31.292 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 75925 00:40:31.292 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 75925 00:40:31.552 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:31.552 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:31.552 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:31.552 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:40:31.552 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:40:31.552 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:31.552 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:40:31.552 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:31.552 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:40:31.552 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:40:31.552 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:40:31.552 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:40:31.552 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:40:31.811 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:40:31.811 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:40:31.811 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:40:31.811 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:40:31.811 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:40:31.811 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:40:31.811 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:40:31.811 14:01:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:40:31.812 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:40:31.812 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:40:31.812 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:31.812 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:31.812 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:31.812 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:40:31.812 00:40:31.812 real 0m10.461s 00:40:31.812 user 0m19.037s 00:40:31.812 sys 0m2.422s 00:40:31.812 ************************************ 00:40:31.812 END TEST nvmf_host_discovery 00:40:31.812 ************************************ 00:40:31.812 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:31.812 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:32.071 14:01:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:40:32.071 14:01:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:32.071 14:01:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:32.071 14:01:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:40:32.071 ************************************ 00:40:32.071 START TEST nvmf_host_multipath_status 00:40:32.071 ************************************ 00:40:32.071 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:40:32.071 * Looking for test storage... 00:40:32.071 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:40:32.071 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:32.071 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:40:32.071 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:32.071 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:32.071 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:32.071 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:32.071 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:32.071 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:40:32.071 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:40:32.071 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:40:32.071 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:40:32.071 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:40:32.071 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:40:32.071 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:40:32.071 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:32.071 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:40:32.071 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:40:32.071 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:32.071 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:32.071 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:40:32.071 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:40:32.071 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:32.071 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:40:32.071 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:40:32.071 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:40:32.071 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:40:32.071 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:32.071 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:32.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:32.331 --rc genhtml_branch_coverage=1 00:40:32.331 --rc genhtml_function_coverage=1 00:40:32.331 --rc genhtml_legend=1 00:40:32.331 --rc geninfo_all_blocks=1 00:40:32.331 --rc geninfo_unexecuted_blocks=1 00:40:32.331 00:40:32.331 ' 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:32.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:32.331 --rc genhtml_branch_coverage=1 00:40:32.331 --rc genhtml_function_coverage=1 00:40:32.331 --rc genhtml_legend=1 00:40:32.331 --rc geninfo_all_blocks=1 00:40:32.331 --rc geninfo_unexecuted_blocks=1 00:40:32.331 00:40:32.331 ' 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:32.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:32.331 --rc genhtml_branch_coverage=1 00:40:32.331 --rc genhtml_function_coverage=1 00:40:32.331 --rc genhtml_legend=1 00:40:32.331 --rc geninfo_all_blocks=1 00:40:32.331 --rc geninfo_unexecuted_blocks=1 00:40:32.331 00:40:32.331 ' 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:32.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:32.331 --rc genhtml_branch_coverage=1 00:40:32.331 --rc genhtml_function_coverage=1 00:40:32.331 --rc genhtml_legend=1 00:40:32.331 --rc geninfo_all_blocks=1 00:40:32.331 --rc geninfo_unexecuted_blocks=1 00:40:32.331 00:40:32.331 ' 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:40:32.331 14:01:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=105ec898-1662-46bd-85be-b241e399edb9 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:32.331 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:40:32.331 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:40:32.332 Cannot find device "nvmf_init_br" 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:40:32.332 Cannot find device "nvmf_init_br2" 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:40:32.332 Cannot find device "nvmf_tgt_br" 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:40:32.332 Cannot find device "nvmf_tgt_br2" 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:40:32.332 Cannot find device "nvmf_init_br" 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:40:32.332 Cannot find device "nvmf_init_br2" 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:40:32.332 Cannot find device "nvmf_tgt_br" 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:40:32.332 Cannot find device "nvmf_tgt_br2" 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:40:32.332 Cannot find device "nvmf_br" 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:40:32.332 Cannot find device "nvmf_init_if" 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:40:32.332 Cannot find device "nvmf_init_if2" 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:40:32.332 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:40:32.332 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:40:32.332 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:40:32.592 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:40:32.592 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:40:32.592 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:40:32.592 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:40:32.592 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:40:32.592 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:40:32.592 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:40:32.592 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:40:32.592 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:40:32.592 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:40:32.592 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:40:32.592 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:40:32.592 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:40:32.592 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:40:32.592 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:40:32.592 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:40:32.592 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:40:32.592 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:40:32.592 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:40:32.592 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:40:32.592 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:40:32.592 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:40:32.592 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:40:32.592 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:40:32.592 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:40:32.592 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:40:32.592 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:40:32.592 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:40:32.592 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:40:32.592 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:40:32.592 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:40:32.592 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:40:32.592 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.123 ms 00:40:32.592 00:40:32.592 --- 10.0.0.3 ping statistics --- 00:40:32.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:32.592 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:40:32.592 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:40:32.592 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:40:32.592 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.095 ms 00:40:32.592 00:40:32.592 --- 10.0.0.4 ping statistics --- 00:40:32.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:32.592 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:40:32.592 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:40:32.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:32.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:40:32.592 00:40:32.592 --- 10.0.0.1 ping statistics --- 00:40:32.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:32.592 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:40:32.592 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:40:32.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:32.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:40:32.592 00:40:32.592 --- 10.0.0.2 ping statistics --- 00:40:32.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:32.592 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:40:32.592 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:32.593 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:40:32.593 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:32.593 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:32.593 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:32.593 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:32.593 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:32.593 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:32.593 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:32.851 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:40:32.852 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:32.852 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:32.852 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:40:32.852 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=76463 00:40:32.852 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:40:32.852 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 76463 00:40:32.852 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76463 ']' 00:40:32.852 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:32.852 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:32.852 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:32.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
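The nvmf_veth_init trace above builds a small two-sided test network: the initiator-side veth ends stay in the default namespace, the target-side ends are moved into the nvmf_tgt_ns_spdk namespace, and the peer ends are enslaved to a common bridge. Reduced to a single interface pair (names, addresses, and port taken from the log; the second pair, the iptables rule comments, and error handling are omitted), the setup is roughly:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3                                    # host -> target namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # target namespace -> host

The four pings whose statistics appear above are exactly these reachability checks, run for both interface pairs and in both directions.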
00:40:32.852 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:32.852 14:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:40:32.852 [2024-11-20 14:01:30.007612] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:40:32.852 [2024-11-20 14:01:30.007752] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:32.852 [2024-11-20 14:01:30.159876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:33.110 [2024-11-20 14:01:30.224602] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:33.110 [2024-11-20 14:01:30.224729] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:33.110 [2024-11-20 14:01:30.224771] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:33.110 [2024-11-20 14:01:30.224801] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:33.110 [2024-11-20 14:01:30.224820] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:33.110 [2024-11-20 14:01:30.225670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:33.110 [2024-11-20 14:01:30.225670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:33.110 [2024-11-20 14:01:30.271182] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:40:33.679 14:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:33.679 14:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:40:33.679 14:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:33.679 14:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:33.679 14:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:40:33.679 14:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:33.679 14:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76463 00:40:33.679 14:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:33.938 [2024-11-20 14:01:31.147741] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:33.938 14:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:40:34.198 Malloc0 00:40:34.198 14:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:40:34.457 14:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:34.716 14:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:40:34.975 [2024-11-20 14:01:32.081679] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:40:34.975 14:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:40:35.238 [2024-11-20 14:01:32.305349] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:40:35.238 14:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:40:35.238 14:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76513 00:40:35.238 14:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:40:35.238 14:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76513 /var/tmp/bdevperf.sock 00:40:35.238 14:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76513 ']' 00:40:35.238 14:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:35.238 14:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:35.238 14:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:35.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
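With the network up, nvmf_tgt is started inside the nvmf_tgt_ns_spdk namespace and configured from the host over /var/tmp/spdk.sock. Condensed from the RPC calls traced above, the target-side setup for this test is (flags copied from the log; RPC stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421

The two listeners on ports 4420 and 4421 are the two paths whose ANA states are toggled for the rest of the test, while bdevperf, just launched with its RPC socket at /var/tmp/bdevperf.sock, plays the host side.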
00:40:35.238 14:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:35.238 14:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:40:36.176 14:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:36.176 14:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:40:36.176 14:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:40:36.176 14:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:40:36.744 Nvme0n1 00:40:36.744 14:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:40:37.003 Nvme0n1 00:40:37.003 14:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:40:37.003 14:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:40:38.909 14:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:40:38.909 14:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:40:39.168 14:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:40:39.426 14:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:40:40.362 14:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:40:40.362 14:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:40:40.362 14:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:40.362 14:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:40.621 14:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:40.622 14:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:40:40.622 14:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:40.622 14:01:37 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:40.880 14:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:40.880 14:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:40.880 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:40.880 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:41.139 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:41.139 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:41.139 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:41.139 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:41.139 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:41.139 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:40:41.139 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:41.139 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:41.397 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:41.397 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:40:41.397 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:41.397 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:41.654 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:41.654 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:40:41.654 14:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:40:41.912 14:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 
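bdevperf attaches both portals to the same controller name with -x multipath, so 10.0.0.3:4420 and 10.0.0.3:4421 become two I/O paths of a single Nvme0n1 bdev. Every check_status cycle from here on follows the same pattern: flip the ANA state of each listener, sleep a second, then read the per-path flags back from bdevperf. A sketch of the host-side check, reconstructed from the jq filters in the trace (the real helper lives in test/nvmf/host/multipath_status.sh and may differ in detail):

# port_status <trsvcid> <field> <expected>: read one flag for one path and compare
port_status() {
    local got
    got=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
          | jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$1\").$2")
    [[ $got == "$3" ]]
}

port_status 4420 current true      # both listeners optimized: 4420 is the single active path
port_status 4421 current false
port_status 4420 connected true    # both paths keep their TCP connection regardless of ANA state
port_status 4421 accessible true   # optimized and non_optimized paths stay accessible

The six booleans passed to check_status map onto exactly these fields: current, connected, and accessible, each checked first for port 4420 and then for 4421.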
00:40:42.171 14:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:40:43.175 14:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:40:43.175 14:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:40:43.175 14:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:43.175 14:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:43.175 14:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:43.175 14:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:40:43.175 14:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:43.175 14:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:43.434 14:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:43.434 14:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:43.434 14:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:43.434 14:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:43.693 14:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:43.693 14:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:43.693 14:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:43.693 14:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:43.952 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:43.952 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:40:43.952 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:43.952 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:43.952 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:43.952 14:01:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:40:43.952 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:43.952 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:44.211 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:44.211 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:40:44.211 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:40:44.468 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:40:44.725 14:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:40:45.663 14:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:40:45.663 14:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:40:45.663 14:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:45.663 14:01:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:45.921 14:01:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:45.921 14:01:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:40:45.921 14:01:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:45.921 14:01:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:45.921 14:01:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:45.921 14:01:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:45.921 14:01:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:45.921 14:01:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:46.181 14:01:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:46.181 14:01:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:46.181 14:01:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:46.181 14:01:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:46.440 14:01:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:46.440 14:01:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:40:46.441 14:01:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:46.441 14:01:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:46.699 14:01:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:46.699 14:01:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:40:46.699 14:01:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:46.699 14:01:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:46.957 14:01:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:46.958 14:01:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:40:46.958 14:01:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:40:47.217 14:01:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:40:47.478 14:01:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:40:48.431 14:01:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:40:48.431 14:01:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:40:48.431 14:01:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:48.431 14:01:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:48.702 14:01:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:48.702 14:01:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 
4421 current false 00:40:48.702 14:01:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:48.702 14:01:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:48.963 14:01:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:48.963 14:01:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:48.963 14:01:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:48.963 14:01:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:49.222 14:01:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:49.222 14:01:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:49.222 14:01:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:49.222 14:01:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:49.487 14:01:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:49.487 14:01:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:40:49.487 14:01:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:49.487 14:01:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:49.749 14:01:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:49.749 14:01:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:40:49.749 14:01:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:49.749 14:01:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:50.013 14:01:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:50.013 14:01:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:40:50.013 14:01:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:40:50.274 14:01:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:40:50.274 14:01:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:40:51.655 14:01:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:40:51.655 14:01:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:40:51.655 14:01:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:51.655 14:01:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:51.655 14:01:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:51.655 14:01:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:40:51.655 14:01:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:51.655 14:01:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:51.915 14:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:51.915 14:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:51.915 14:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:51.915 14:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:52.173 14:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:52.173 14:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:52.173 14:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:52.173 14:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:52.433 14:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:52.433 14:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:40:52.433 14:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:52.433 14:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] 
| select (.transport.trsvcid=="4420").accessible' 00:40:52.693 14:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:52.693 14:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:40:52.693 14:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:52.693 14:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:52.693 14:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:52.693 14:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:40:52.693 14:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:40:52.953 14:01:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:40:53.212 14:01:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:40:54.149 14:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:40:54.149 14:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:40:54.149 14:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:54.149 14:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:54.409 14:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:54.410 14:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:40:54.410 14:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:54.410 14:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:54.669 14:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:54.669 14:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:54.669 14:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:54.669 14:01:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
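The port_status checks traced above all follow one pattern: query the bdevperf app's io_paths over its RPC socket, pick out the path with the given trsvcid via jq, and compare one field against the expected value. A minimal standalone sketch of that pattern follows; the helper name port_status comes from the trace, but its body here is a reconstruction, not the actual multipath_status.sh source.

    # Sketch: return success if the named field of the io_path on <port> equals <expected>.
    # Uses the rpc.py path and bdevperf RPC socket shown in the trace above.
    port_status() {
        local port=$1 field=$2 expected=$3
        local actual
        actual=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
        [[ "$actual" == "$expected" ]]
    }

    # Example: expect port 4420 to be connected but not the current (active) path.
    port_status 4420 connected true && port_status 4420 current false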
00:40:54.929 14:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:54.929 14:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:54.929 14:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:54.929 14:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:55.188 14:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:55.188 14:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:40:55.188 14:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:55.188 14:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:55.447 14:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:55.447 14:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:40:55.447 14:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:55.447 14:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:55.707 14:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:55.707 14:01:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:40:55.966 14:01:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:40:55.966 14:01:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:40:56.225 14:01:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:40:56.485 14:01:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:40:57.421 14:01:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:40:57.421 14:01:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:40:57.421 14:01:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
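Each phase of the test flips the ANA state of the two listeners, waits a second for bdev_nvme to pick up the change, and then re-verifies the path roles. A sketch of that sequence as it appears in the trace; set_ANA_state is the trace's helper name, and its body below (plus the use of port_status as sketched earlier) is an assumption reconstructed from the logged rpc.py calls.

    # Sketch: set ANA state per listener (4420 gets $1, 4421 gets $2), then re-check paths.
    set_ANA_state() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
            nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n "$1"
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
            nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n "$2"
    }

    # e.g. make 4420 non_optimized and 4421 optimized, then expect I/O to follow 4421
    set_ANA_state non_optimized optimized
    sleep 1
    port_status 4421 current true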
00:40:57.421 14:01:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:57.681 14:01:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:57.681 14:01:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:40:57.681 14:01:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:57.681 14:01:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:57.940 14:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:57.940 14:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:57.940 14:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:57.940 14:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:58.200 14:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:58.200 14:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:58.200 14:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:58.200 14:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:58.200 14:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:58.200 14:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:40:58.200 14:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:58.200 14:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:58.459 14:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:58.459 14:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:40:58.459 14:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:58.459 14:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:58.721 14:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:58.721 
14:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:40:58.721 14:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:40:58.983 14:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:40:59.242 14:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:41:00.181 14:01:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:41:00.181 14:01:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:41:00.181 14:01:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:41:00.181 14:01:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:41:00.439 14:01:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:41:00.440 14:01:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:41:00.440 14:01:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:41:00.440 14:01:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:41:00.698 14:01:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:41:00.698 14:01:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:41:00.698 14:01:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:41:00.698 14:01:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:41:00.957 14:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:41:00.957 14:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:41:00.957 14:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:41:00.958 14:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:41:00.958 14:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:41:00.958 14:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:41:00.958 14:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:41:00.958 14:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:41:01.216 14:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:41:01.216 14:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:41:01.216 14:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:41:01.216 14:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:41:01.474 14:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:41:01.474 14:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:41:01.474 14:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:41:01.733 14:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:41:01.992 14:01:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:41:02.929 14:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:41:02.929 14:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:41:02.929 14:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:41:02.929 14:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:41:03.188 14:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:41:03.188 14:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:41:03.188 14:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:41:03.188 14:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:41:03.449 14:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:41:03.449 14:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:41:03.449 14:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:41:03.449 14:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:41:03.449 14:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:41:03.449 14:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:41:03.449 14:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:41:03.449 14:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:41:03.707 14:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:41:03.707 14:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:41:03.707 14:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:41:03.707 14:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:41:03.965 14:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:41:03.965 14:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:41:03.965 14:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:41:03.965 14:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:41:04.224 14:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:41:04.224 14:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:41:04.224 14:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:41:04.482 14:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:41:04.482 14:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:41:05.868 14:02:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:41:05.868 14:02:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:41:05.868 14:02:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:41:05.868 14:02:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:41:05.868 14:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:41:05.868 14:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:41:05.868 14:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:41:05.868 14:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:41:06.127 14:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:41:06.127 14:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:41:06.127 14:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:41:06.127 14:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:41:06.127 14:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:41:06.127 14:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:41:06.127 14:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:41:06.127 14:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:41:06.386 14:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:41:06.386 14:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:41:06.386 14:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:41:06.386 14:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:41:06.646 14:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:41:06.646 14:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:41:06.646 14:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:41:06.646 14:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:41:06.905 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:41:06.905 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76513 00:41:06.905 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76513 ']' 00:41:06.905 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76513 00:41:06.905 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:41:06.905 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:06.905 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76513 00:41:06.905 killing process with pid 76513 00:41:06.905 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:41:06.905 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:41:06.905 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76513' 00:41:06.905 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76513 00:41:06.905 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76513 00:41:06.905 { 00:41:06.905 "results": [ 00:41:06.905 { 00:41:06.905 "job": "Nvme0n1", 00:41:06.905 "core_mask": "0x4", 00:41:06.905 "workload": "verify", 00:41:06.905 "status": "terminated", 00:41:06.905 "verify_range": { 00:41:06.905 "start": 0, 00:41:06.905 "length": 16384 00:41:06.905 }, 00:41:06.905 "queue_depth": 128, 00:41:06.905 "io_size": 4096, 00:41:06.905 "runtime": 29.94966, 00:41:06.905 "iops": 9642.546860298247, 00:41:06.905 "mibps": 37.66619867304003, 00:41:06.905 "io_failed": 0, 00:41:06.905 "io_timeout": 0, 00:41:06.905 "avg_latency_us": 13248.948069173004, 00:41:06.905 "min_latency_us": 133.2541484716157, 00:41:06.905 "max_latency_us": 4014809.7676855894 00:41:06.905 } 00:41:06.905 ], 00:41:06.905 "core_count": 1 00:41:06.905 } 00:41:07.168 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76513 00:41:07.168 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:41:07.168 [2024-11-20 14:01:32.374839] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:41:07.168 [2024-11-20 14:01:32.374948] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76513 ] 00:41:07.168 [2024-11-20 14:01:32.522208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:07.168 [2024-11-20 14:01:32.574987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:07.168 [2024-11-20 14:01:32.624643] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:41:07.168 Running I/O for 90 seconds... 
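The bdevperf summary printed above is internally consistent: the mibps figure is simply iops multiplied by the 4096-byte io_size. A quick arithmetic check (awk used only for the computation, with the numbers taken from the results block):

    awk 'BEGIN { printf "%.2f MiB/s\n", 9642.546860298247 * 4096 / (1024 * 1024) }'
    # prints 37.67 MiB/s, matching the reported mibps of 37.666...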
00:41:07.168 9917.00 IOPS, 38.74 MiB/s [2024-11-20T14:02:04.491Z] 10087.50 IOPS, 39.40 MiB/s [2024-11-20T14:02:04.491Z] 10141.00 IOPS, 39.61 MiB/s [2024-11-20T14:02:04.491Z] 10385.75 IOPS, 40.57 MiB/s [2024-11-20T14:02:04.491Z] 10480.40 IOPS, 40.94 MiB/s [2024-11-20T14:02:04.491Z] 10341.67 IOPS, 40.40 MiB/s [2024-11-20T14:02:04.491Z] 10196.86 IOPS, 39.83 MiB/s [2024-11-20T14:02:04.491Z] 10288.12 IOPS, 40.19 MiB/s [2024-11-20T14:02:04.491Z] 10484.56 IOPS, 40.96 MiB/s [2024-11-20T14:02:04.491Z] 10469.30 IOPS, 40.90 MiB/s [2024-11-20T14:02:04.491Z] 10427.73 IOPS, 40.73 MiB/s [2024-11-20T14:02:04.491Z] 10403.42 IOPS, 40.64 MiB/s [2024-11-20T14:02:04.491Z] 10352.54 IOPS, 40.44 MiB/s [2024-11-20T14:02:04.491Z] [2024-11-20 14:01:47.387487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.168 [2024-11-20 14:01:47.387556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:41:07.168 [2024-11-20 14:01:47.387606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:45568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.168 [2024-11-20 14:01:47.387617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:41:07.168 [2024-11-20 14:01:47.387632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:45576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.168 [2024-11-20 14:01:47.387642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:41:07.168 [2024-11-20 14:01:47.387656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:45584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.168 [2024-11-20 14:01:47.387664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:41:07.168 [2024-11-20 14:01:47.387678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:45592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.168 [2024-11-20 14:01:47.387687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:41:07.168 [2024-11-20 14:01:47.387701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:45600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.168 [2024-11-20 14:01:47.387720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:41:07.168 [2024-11-20 14:01:47.387734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:45608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.168 [2024-11-20 14:01:47.387745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:41:07.168 [2024-11-20 14:01:47.387759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:45616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.168 [2024-11-20 14:01:47.387768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:12 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:41:07.168 [2024-11-20 14:01:47.387781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:45624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.168 [2024-11-20 14:01:47.387790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:41:07.168 [2024-11-20 14:01:47.387832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:45632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.168 [2024-11-20 14:01:47.387841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:41:07.168 [2024-11-20 14:01:47.387855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:45640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.168 [2024-11-20 14:01:47.387864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:41:07.168 [2024-11-20 14:01:47.387878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:45648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.168 [2024-11-20 14:01:47.387887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:41:07.168 [2024-11-20 14:01:47.387900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:45656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.168 [2024-11-20 14:01:47.387909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:41:07.168 [2024-11-20 14:01:47.387922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:45664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.168 [2024-11-20 14:01:47.387931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:41:07.168 [2024-11-20 14:01:47.387944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:45672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.168 [2024-11-20 14:01:47.387953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:41:07.168 [2024-11-20 14:01:47.387966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:45680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.168 [2024-11-20 14:01:47.387974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:41:07.168 [2024-11-20 14:01:47.387988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:45176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.169 [2024-11-20 14:01:47.387999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:41:07.169 [2024-11-20 14:01:47.388013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:45184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.169 [2024-11-20 14:01:47.388021] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:41:07.169 [2024-11-20 14:01:47.388035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:45192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.169 [2024-11-20 14:01:47.388044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:41:07.169 [2024-11-20 14:01:47.388058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:45200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.169 [2024-11-20 14:01:47.388067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:41:07.169 [2024-11-20 14:01:47.388081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:45208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.169 [2024-11-20 14:01:47.388089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:41:07.169 [2024-11-20 14:01:47.388110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:45216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.169 [2024-11-20 14:01:47.388120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:41:07.169 [2024-11-20 14:01:47.388134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:45224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.169 [2024-11-20 14:01:47.388142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:41:07.169 [2024-11-20 14:01:47.388157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:45232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.169 [2024-11-20 14:01:47.388166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:41:07.169 [2024-11-20 14:01:47.388179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:45240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.169 [2024-11-20 14:01:47.388188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:41:07.169 [2024-11-20 14:01:47.388202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:45248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.169 [2024-11-20 14:01:47.388221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:07.169 [2024-11-20 14:01:47.388236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:45256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.169 [2024-11-20 14:01:47.388245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:41:07.169 [2024-11-20 14:01:47.388259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:45264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.169 [2024-11-20 
14:01:47.388267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:41:07.169 [2024-11-20 14:01:47.388281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:45272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.169 [2024-11-20 14:01:47.388289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:41:07.169 [2024-11-20 14:01:47.388303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:45280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.169 [2024-11-20 14:01:47.388312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:41:07.169 [2024-11-20 14:01:47.388325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:45288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.169 [2024-11-20 14:01:47.388336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:41:07.169 [2024-11-20 14:01:47.388349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:45296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.169 [2024-11-20 14:01:47.388358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:41:07.169 [2024-11-20 14:01:47.388374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:45688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.169 [2024-11-20 14:01:47.388386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:41:07.169 [2024-11-20 14:01:47.388400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:45696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.169 [2024-11-20 14:01:47.388415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:41:07.169 [2024-11-20 14:01:47.388428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:45704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.169 [2024-11-20 14:01:47.388438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:41:07.169 [2024-11-20 14:01:47.388452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:45712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.169 [2024-11-20 14:01:47.388461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:41:07.169 [2024-11-20 14:01:47.388474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:45720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.169 [2024-11-20 14:01:47.388483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:41:07.169 [2024-11-20 14:01:47.388497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45728 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:41:07.169 [2024-11-20 14:01:47.388505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:41:07.169 [2024-11-20 14:01:47.388518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:45736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.169 [2024-11-20 14:01:47.388526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:41:07.169 [2024-11-20 14:01:47.388540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:45744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.169 [2024-11-20 14:01:47.388548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:41:07.169 [2024-11-20 14:01:47.388561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:45752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.169 [2024-11-20 14:01:47.388569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:41:07.169 [2024-11-20 14:01:47.388582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:45760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.169 [2024-11-20 14:01:47.388590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:41:07.169 [2024-11-20 14:01:47.388603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:45768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.169 [2024-11-20 14:01:47.388612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:41:07.169 [2024-11-20 14:01:47.388625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:45776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.169 [2024-11-20 14:01:47.388634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:41:07.169 [2024-11-20 14:01:47.388647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:45784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.169 [2024-11-20 14:01:47.388660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:41:07.169 [2024-11-20 14:01:47.388673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:45792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.169 [2024-11-20 14:01:47.388687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:41:07.169 [2024-11-20 14:01:47.388700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.169 [2024-11-20 14:01:47.388709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:41:07.169 [2024-11-20 14:01:47.388739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:45 nsid:1 lba:45808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.169 [2024-11-20 14:01:47.388748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:41:07.169 [2024-11-20 14:01:47.388762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:45304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.169 [2024-11-20 14:01:47.388773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:41:07.169 [2024-11-20 14:01:47.388789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:45312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.169 [2024-11-20 14:01:47.388798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:41:07.169 [2024-11-20 14:01:47.388812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:45320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.169 [2024-11-20 14:01:47.388820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:41:07.169 [2024-11-20 14:01:47.388835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:45328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.170 [2024-11-20 14:01:47.388846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:41:07.170 [2024-11-20 14:01:47.388860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:45336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.170 [2024-11-20 14:01:47.388871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:41:07.170 [2024-11-20 14:01:47.388886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:45344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.170 [2024-11-20 14:01:47.388898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:41:07.170 [2024-11-20 14:01:47.388914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:45352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.170 [2024-11-20 14:01:47.388927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:41:07.170 [2024-11-20 14:01:47.388944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:45360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.170 [2024-11-20 14:01:47.388955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:41:07.170 [2024-11-20 14:01:47.388971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:45368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.170 [2024-11-20 14:01:47.389000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:41:07.170 [2024-11-20 14:01:47.389014] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:45376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.170 [2024-11-20 14:01:47.389024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:07.170 [2024-11-20 14:01:47.389046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:45384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.170 [2024-11-20 14:01:47.389056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:41:07.170 [2024-11-20 14:01:47.389070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:45392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.170 [2024-11-20 14:01:47.389080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:41:07.170 [2024-11-20 14:01:47.389095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:45400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.170 [2024-11-20 14:01:47.389104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:41:07.170 [2024-11-20 14:01:47.389119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:45408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.170 [2024-11-20 14:01:47.389128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:41:07.170 [2024-11-20 14:01:47.389142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:45416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.170 [2024-11-20 14:01:47.389151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:41:07.170 [2024-11-20 14:01:47.389165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:45424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.170 [2024-11-20 14:01:47.389174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:41:07.170 [2024-11-20 14:01:47.389190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:45816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.170 [2024-11-20 14:01:47.389200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:41:07.170 [2024-11-20 14:01:47.389215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:45824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.170 [2024-11-20 14:01:47.389225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:41:07.170 [2024-11-20 14:01:47.389238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:45832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.170 [2024-11-20 14:01:47.389249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006a p:0 m:0 
dnr:0 00:41:07.170 [2024-11-20 14:01:47.389263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:45840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.170 [2024-11-20 14:01:47.389272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:41:07.170 [2024-11-20 14:01:47.389285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:45848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.170 [2024-11-20 14:01:47.389295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:41:07.170 [2024-11-20 14:01:47.389308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.170 [2024-11-20 14:01:47.389316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:41:07.170 [2024-11-20 14:01:47.389337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:45864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.170 [2024-11-20 14:01:47.389348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:41:07.170 [2024-11-20 14:01:47.389362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:45872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.170 [2024-11-20 14:01:47.389372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:41:07.170 [2024-11-20 14:01:47.389387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:45432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.170 [2024-11-20 14:01:47.389395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:41:07.170 [2024-11-20 14:01:47.389409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:45440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.170 [2024-11-20 14:01:47.389418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:41:07.170 [2024-11-20 14:01:47.389432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:45448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.170 [2024-11-20 14:01:47.389440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:41:07.170 [2024-11-20 14:01:47.389455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:45456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.170 [2024-11-20 14:01:47.389465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:41:07.170 [2024-11-20 14:01:47.389479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:45464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.170 [2024-11-20 14:01:47.389489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:41:07.170 [2024-11-20 14:01:47.389503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:45472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.170 [2024-11-20 14:01:47.389511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:41:07.170 [2024-11-20 14:01:47.389526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:45480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.170 [2024-11-20 14:01:47.389534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:41:07.170 [2024-11-20 14:01:47.389548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:45488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.170 [2024-11-20 14:01:47.389557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:41:07.170 [2024-11-20 14:01:47.389592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:45880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.170 [2024-11-20 14:01:47.389606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:41:07.170 [2024-11-20 14:01:47.389621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:45888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.170 [2024-11-20 14:01:47.389630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:41:07.170 [2024-11-20 14:01:47.389644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:45896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.170 [2024-11-20 14:01:47.389658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:41:07.170 [2024-11-20 14:01:47.389672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:45904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.170 [2024-11-20 14:01:47.389681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:41:07.170 [2024-11-20 14:01:47.389696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:45912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.170 [2024-11-20 14:01:47.389705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:41:07.170 [2024-11-20 14:01:47.389741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:45920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.170 [2024-11-20 14:01:47.389750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:41:07.170 [2024-11-20 14:01:47.389764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:45928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.170 [2024-11-20 14:01:47.389774] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:41:07.170 [2024-11-20 14:01:47.389789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:45936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.170 [2024-11-20 14:01:47.389798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:41:07.170 [2024-11-20 14:01:47.389815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.170 [2024-11-20 14:01:47.389824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:07.171 [2024-11-20 14:01:47.389838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:45952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.171 [2024-11-20 14:01:47.389849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:07.171 [2024-11-20 14:01:47.389863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:45960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.171 [2024-11-20 14:01:47.389874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:41:07.171 [2024-11-20 14:01:47.389889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:45968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.171 [2024-11-20 14:01:47.389899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:41:07.171 [2024-11-20 14:01:47.389914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:45976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.171 [2024-11-20 14:01:47.389924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:41:07.171 [2024-11-20 14:01:47.389938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:45984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.171 [2024-11-20 14:01:47.389948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:41:07.171 [2024-11-20 14:01:47.389962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:45992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.171 [2024-11-20 14:01:47.389979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:41:07.171 [2024-11-20 14:01:47.389993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:46000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.171 [2024-11-20 14:01:47.390002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:41:07.171 [2024-11-20 14:01:47.390302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:46008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:41:07.171 [2024-11-20 14:01:47.390319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:41:07.171 [2024-11-20 14:01:47.390340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:46016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.171 [2024-11-20 14:01:47.390358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:41:07.171 [2024-11-20 14:01:47.390379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:46024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.171 [2024-11-20 14:01:47.390388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:41:07.171 [2024-11-20 14:01:47.390408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:46032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.171 [2024-11-20 14:01:47.390420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:41:07.171 [2024-11-20 14:01:47.390439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:46040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.171 [2024-11-20 14:01:47.390448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:41:07.171 [2024-11-20 14:01:47.390467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:46048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.171 [2024-11-20 14:01:47.390478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:41:07.171 [2024-11-20 14:01:47.390498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:46056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.171 [2024-11-20 14:01:47.390507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:41:07.171 [2024-11-20 14:01:47.390525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:46064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.171 [2024-11-20 14:01:47.390534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:41:07.171 [2024-11-20 14:01:47.390552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:45496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.171 [2024-11-20 14:01:47.390561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:41:07.171 [2024-11-20 14:01:47.390579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:45504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.171 [2024-11-20 14:01:47.390590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:41:07.171 [2024-11-20 14:01:47.390611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 
lba:45512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.171 [2024-11-20 14:01:47.390620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:41:07.171 [2024-11-20 14:01:47.390647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:45520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.171 [2024-11-20 14:01:47.390657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:41:07.171 [2024-11-20 14:01:47.390675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:45528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.171 [2024-11-20 14:01:47.390684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:41:07.171 [2024-11-20 14:01:47.390717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:45536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.171 [2024-11-20 14:01:47.390728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:41:07.171 [2024-11-20 14:01:47.390747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:45544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.171 [2024-11-20 14:01:47.390759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:41:07.171 [2024-11-20 14:01:47.390780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:45552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.171 [2024-11-20 14:01:47.390794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:41:07.171 [2024-11-20 14:01:47.390813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:46072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.171 [2024-11-20 14:01:47.390823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:41:07.171 [2024-11-20 14:01:47.390844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:46080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.171 [2024-11-20 14:01:47.390857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:41:07.171 [2024-11-20 14:01:47.390885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:46088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.171 [2024-11-20 14:01:47.390896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:41:07.171 [2024-11-20 14:01:47.390915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:46096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.171 [2024-11-20 14:01:47.390925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:41:07.171 [2024-11-20 14:01:47.390943] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:46104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.171 [2024-11-20 14:01:47.390953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:41:07.171 [2024-11-20 14:01:47.390971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:46112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.171 [2024-11-20 14:01:47.390981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:41:07.171 [2024-11-20 14:01:47.391001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:46120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.171 [2024-11-20 14:01:47.391010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:41:07.171 [2024-11-20 14:01:47.391048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:46128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.171 [2024-11-20 14:01:47.391059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:41:07.171 [2024-11-20 14:01:47.391077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:46136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.171 [2024-11-20 14:01:47.391088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:41:07.171 [2024-11-20 14:01:47.391107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:46144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.171 [2024-11-20 14:01:47.391117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:07.171 [2024-11-20 14:01:47.391135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:46152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.171 [2024-11-20 14:01:47.391144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:41:07.171 [2024-11-20 14:01:47.391162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:46160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.171 [2024-11-20 14:01:47.391171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:41:07.171 [2024-11-20 14:01:47.391189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:46168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.171 [2024-11-20 14:01:47.391200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:41:07.171 [2024-11-20 14:01:47.391220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:46176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.171 [2024-11-20 14:01:47.391231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 
00:41:07.171 [2024-11-20 14:01:47.391249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:46184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.171 [2024-11-20 14:01:47.391259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:41:07.172 [2024-11-20 14:01:47.391277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:46192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.172 [2024-11-20 14:01:47.391288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:41:07.172 9765.64 IOPS, 38.15 MiB/s [2024-11-20T14:02:04.495Z] 9114.60 IOPS, 35.60 MiB/s [2024-11-20T14:02:04.495Z] 8544.94 IOPS, 33.38 MiB/s [2024-11-20T14:02:04.495Z] 8042.29 IOPS, 31.42 MiB/s [2024-11-20T14:02:04.495Z] 8024.44 IOPS, 31.35 MiB/s [2024-11-20T14:02:04.495Z] 8131.37 IOPS, 31.76 MiB/s [2024-11-20T14:02:04.495Z] 8385.25 IOPS, 32.75 MiB/s [2024-11-20T14:02:04.495Z] 8650.57 IOPS, 33.79 MiB/s [2024-11-20T14:02:04.495Z] 8889.09 IOPS, 34.72 MiB/s [2024-11-20T14:02:04.495Z] 8950.35 IOPS, 34.96 MiB/s [2024-11-20T14:02:04.495Z] 8998.08 IOPS, 35.15 MiB/s [2024-11-20T14:02:04.495Z] 9052.52 IOPS, 35.36 MiB/s [2024-11-20T14:02:04.495Z] 9260.23 IOPS, 36.17 MiB/s [2024-11-20T14:02:04.495Z] 9467.41 IOPS, 36.98 MiB/s [2024-11-20T14:02:04.495Z] [2024-11-20 14:02:01.760012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.172 [2024-11-20 14:02:01.760086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:41:07.172 [2024-11-20 14:02:01.760130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.172 [2024-11-20 14:02:01.760141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:41:07.172 [2024-11-20 14:02:01.760186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.172 [2024-11-20 14:02:01.760195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:41:07.172 [2024-11-20 14:02:01.760209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.172 [2024-11-20 14:02:01.760217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:41:07.172 [2024-11-20 14:02:01.760231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.172 [2024-11-20 14:02:01.760239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:41:07.172 [2024-11-20 14:02:01.760253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.172 [2024-11-20 14:02:01.760261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:41:07.172 [2024-11-20 14:02:01.760275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.172 [2024-11-20 14:02:01.760283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:41:07.172 [2024-11-20 14:02:01.760296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.172 [2024-11-20 14:02:01.760304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:41:07.172 [2024-11-20 14:02:01.760317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:19176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.172 [2024-11-20 14:02:01.760325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:41:07.172 [2024-11-20 14:02:01.760338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:19760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.172 [2024-11-20 14:02:01.760347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:41:07.172 [2024-11-20 14:02:01.760933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.172 [2024-11-20 14:02:01.760957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:41:07.172 [2024-11-20 14:02:01.760975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:19792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.172 [2024-11-20 14:02:01.760984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:41:07.172 [2024-11-20 14:02:01.760998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.172 [2024-11-20 14:02:01.761007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:41:07.172 [2024-11-20 14:02:01.761020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:19424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.172 [2024-11-20 14:02:01.761029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:41:07.172 [2024-11-20 14:02:01.761053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.172 [2024-11-20 14:02:01.761062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:41:07.172 [2024-11-20 14:02:01.761075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.172 [2024-11-20 14:02:01.761084] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:41:07.172 [2024-11-20 14:02:01.761098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.172 [2024-11-20 14:02:01.761106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:41:07.172 [2024-11-20 14:02:01.761123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:19824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.172 [2024-11-20 14:02:01.761131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:41:07.172 [2024-11-20 14:02:01.761145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.172 [2024-11-20 14:02:01.761153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:41:07.172 [2024-11-20 14:02:01.761166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.172 [2024-11-20 14:02:01.761175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:41:07.172 [2024-11-20 14:02:01.761188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:19208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.172 [2024-11-20 14:02:01.761197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:07.172 [2024-11-20 14:02:01.761210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:19240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.172 [2024-11-20 14:02:01.761218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:41:07.172 [2024-11-20 14:02:01.761231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.172 [2024-11-20 14:02:01.761239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:41:07.172 [2024-11-20 14:02:01.761253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:19304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.172 [2024-11-20 14:02:01.761261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:41:07.172 [2024-11-20 14:02:01.761275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:19872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.172 [2024-11-20 14:02:01.761283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:41:07.172 [2024-11-20 14:02:01.761296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:19888 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:41:07.172 [2024-11-20 14:02:01.761305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:41:07.172 [2024-11-20 14:02:01.761321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.172 [2024-11-20 14:02:01.761335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:41:07.172 [2024-11-20 14:02:01.761349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.172 [2024-11-20 14:02:01.761357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:41:07.172 [2024-11-20 14:02:01.761371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.172 [2024-11-20 14:02:01.761379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:41:07.172 [2024-11-20 14:02:01.761393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.172 [2024-11-20 14:02:01.761401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:41:07.172 [2024-11-20 14:02:01.761414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.172 [2024-11-20 14:02:01.761422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:41:07.172 [2024-11-20 14:02:01.761435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.172 [2024-11-20 14:02:01.761443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:41:07.172 [2024-11-20 14:02:01.761456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:19936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.172 [2024-11-20 14:02:01.761464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:41:07.172 [2024-11-20 14:02:01.761478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.173 [2024-11-20 14:02:01.761487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:41:07.173 [2024-11-20 14:02:01.761500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:19968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.173 [2024-11-20 14:02:01.761508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:41:07.173 [2024-11-20 14:02:01.761521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:94 nsid:1 lba:19984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.173 [2024-11-20 14:02:01.761530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:41:07.173 [2024-11-20 14:02:01.761544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.173 [2024-11-20 14:02:01.761552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:41:07.173 [2024-11-20 14:02:01.761566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.173 [2024-11-20 14:02:01.761574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:41:07.173 [2024-11-20 14:02:01.761587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:19400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.173 [2024-11-20 14:02:01.761600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:41:07.173 [2024-11-20 14:02:01.761615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:19432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:07.173 [2024-11-20 14:02:01.761623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:41:07.173 [2024-11-20 14:02:01.761636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.173 [2024-11-20 14:02:01.761645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:41:07.173 [2024-11-20 14:02:01.761658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.173 [2024-11-20 14:02:01.761667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:41:07.173 [2024-11-20 14:02:01.761692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.173 [2024-11-20 14:02:01.761702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:41:07.173 [2024-11-20 14:02:01.761726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:07.173 [2024-11-20 14:02:01.761734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:41:07.173 9595.68 IOPS, 37.48 MiB/s [2024-11-20T14:02:04.496Z] 9621.21 IOPS, 37.58 MiB/s [2024-11-20T14:02:04.496Z] Received shutdown signal, test time was about 29.950354 seconds 00:41:07.173 00:41:07.173 Latency(us) 00:41:07.173 [2024-11-20T14:02:04.496Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:07.173 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 
4096) 00:41:07.173 Verification LBA range: start 0x0 length 0x4000 00:41:07.173 Nvme0n1 : 29.95 9642.55 37.67 0.00 0.00 13248.95 133.25 4014809.77 00:41:07.173 [2024-11-20T14:02:04.496Z] =================================================================================================================== 00:41:07.173 [2024-11-20T14:02:04.496Z] Total : 9642.55 37.67 0.00 0.00 13248.95 133.25 4014809.77 00:41:07.173 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:07.432 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:41:07.432 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:41:07.432 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:41:07.432 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:07.432 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:41:07.432 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:07.432 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:41:07.432 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:07.432 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:07.432 rmmod nvme_tcp 00:41:07.432 rmmod nvme_fabrics 00:41:07.432 rmmod nvme_keyring 00:41:07.432 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:07.432 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:41:07.432 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:41:07.432 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 76463 ']' 00:41:07.432 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 76463 00:41:07.432 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76463 ']' 00:41:07.432 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76463 00:41:07.432 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:41:07.432 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:07.432 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76463 00:41:07.432 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:07.432 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:07.432 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76463' 00:41:07.432 killing process with pid 76463 00:41:07.432 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76463 00:41:07.432 14:02:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76463 00:41:07.693 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:07.693 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:07.693 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:07.693 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:41:07.693 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:41:07.693 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:07.693 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:41:07.693 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:07.693 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:41:07.693 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:41:07.693 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:41:07.693 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:41:07.693 14:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:41:07.693 14:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:41:07.693 14:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:41:07.953 14:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:41:07.953 14:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:41:07.953 14:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:41:07.953 14:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:41:07.953 14:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:41:07.953 14:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:41:07.953 14:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:41:07.953 14:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:41:07.953 14:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:07.953 14:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:07.953 14:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:07.953 14:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:41:07.953 00:41:07.953 real 0m36.046s 00:41:07.953 user 1m52.784s 00:41:07.953 sys 0m11.374s 00:41:07.953 
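For reference, the nvmftestfini teardown logged above condenses to the following sketch. Every command is taken from this log; the subsystem NQN, the interface and bridge names, and target pid 76463 are specific to this run, and the loop form is an editorial simplification of the line-by-line calls shown above.

# remove the NVMe-oF subsystem the test created, then unload the kernel initiator modules
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
# stop the SPDK target process (pid 76463 in this run), then drop only the
# iptables rules tagged SPDK_NVMF, leaving the rest of the ruleset intact
kill 76463
iptables-save | grep -v SPDK_NVMF | iptables-restore
# detach the veth bridge ends, bring them down, and delete the test topology
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster
    ip link set "$dev" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
# remove_spdk_ns then disposes of the nvmf_tgt_ns_spdk namespace itself (not shown inline here)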
14:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:07.953 14:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:41:07.953 ************************************ 00:41:07.953 END TEST nvmf_host_multipath_status 00:41:07.953 ************************************ 00:41:07.954 14:02:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:41:07.954 14:02:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:41:07.954 14:02:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:07.954 14:02:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:41:08.214 ************************************ 00:41:08.214 START TEST nvmf_discovery_remove_ifc 00:41:08.214 ************************************ 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:41:08.214 * Looking for test storage... 00:41:08.214 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:08.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:08.214 --rc genhtml_branch_coverage=1 00:41:08.214 --rc genhtml_function_coverage=1 00:41:08.214 --rc genhtml_legend=1 00:41:08.214 --rc geninfo_all_blocks=1 00:41:08.214 --rc geninfo_unexecuted_blocks=1 00:41:08.214 00:41:08.214 ' 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:08.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:08.214 --rc genhtml_branch_coverage=1 00:41:08.214 --rc genhtml_function_coverage=1 00:41:08.214 --rc genhtml_legend=1 00:41:08.214 --rc geninfo_all_blocks=1 00:41:08.214 --rc geninfo_unexecuted_blocks=1 00:41:08.214 00:41:08.214 ' 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:08.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:08.214 --rc genhtml_branch_coverage=1 00:41:08.214 --rc genhtml_function_coverage=1 00:41:08.214 --rc genhtml_legend=1 00:41:08.214 --rc geninfo_all_blocks=1 00:41:08.214 --rc geninfo_unexecuted_blocks=1 00:41:08.214 00:41:08.214 ' 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:08.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:08.214 --rc genhtml_branch_coverage=1 00:41:08.214 --rc genhtml_function_coverage=1 00:41:08.214 --rc genhtml_legend=1 00:41:08.214 --rc geninfo_all_blocks=1 00:41:08.214 --rc geninfo_unexecuted_blocks=1 00:41:08.214 00:41:08.214 ' 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:41:08.214 14:02:05 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:08.214 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:08.475 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:41:08.475 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=105ec898-1662-46bd-85be-b241e399edb9 00:41:08.475 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:08.475 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:08.475 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:41:08.475 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:08.475 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:08.475 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:41:08.475 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:08.475 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:08.475 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:08.475 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:08.475 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:08.475 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:08.475 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:41:08.475 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:08.475 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:41:08.475 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:08.475 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:08.475 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:08.475 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:08.475 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:08.475 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:08.475 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:08.475 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:08.475 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:08.475 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:08.475 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:41:08.475 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:41:08.475 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:41:08.475 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:41:08.475 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:41:08.475 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:41:08.475 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:41:08.475 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:08.475 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:08.475 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:08.475 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:08.475 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:08.475 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:08.475 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:08.475 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:08.475 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:41:08.475 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:41:08.475 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:41:08.475 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:41:08.476 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:41:08.476 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:41:08.476 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:08.476 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:41:08.476 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:41:08.476 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:41:08.476 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:08.476 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:41:08.476 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:41:08.476 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:41:08.476 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:41:08.476 14:02:05 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:41:08.476 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:41:08.476 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:08.476 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:41:08.476 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:41:08.476 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:41:08.476 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:41:08.476 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:41:08.476 Cannot find device "nvmf_init_br" 00:41:08.476 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:41:08.476 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:41:08.476 Cannot find device "nvmf_init_br2" 00:41:08.476 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:41:08.476 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:41:08.476 Cannot find device "nvmf_tgt_br" 00:41:08.476 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:41:08.476 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:41:08.476 Cannot find device "nvmf_tgt_br2" 00:41:08.476 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:41:08.476 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:41:08.476 Cannot find device "nvmf_init_br" 00:41:08.476 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:41:08.476 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:41:08.476 Cannot find device "nvmf_init_br2" 00:41:08.476 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:41:08.476 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:41:08.476 Cannot find device "nvmf_tgt_br" 00:41:08.476 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:41:08.476 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:41:08.476 Cannot find device "nvmf_tgt_br2" 00:41:08.476 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:41:08.476 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:41:08.476 Cannot find device "nvmf_br" 00:41:08.476 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:41:08.476 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:41:08.476 Cannot find device "nvmf_init_if" 00:41:08.476 14:02:05 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:41:08.476 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:41:08.476 Cannot find device "nvmf_init_if2" 00:41:08.476 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:41:08.476 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:41:08.476 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:08.476 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:41:08.476 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:41:08.476 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:08.476 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:41:08.476 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:41:08.476 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:41:08.476 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:41:08.476 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:41:08.736 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:41:08.736 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:41:08.736 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:41:08.736 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:41:08.736 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:41:08.736 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:41:08.736 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:41:08.736 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:41:08.736 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:41:08.736 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:41:08.736 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:41:08.736 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:41:08.736 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:41:08.736 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:41:08.736 14:02:05 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:41:08.736 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:41:08.736 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:41:08.736 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:41:08.736 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:41:08.736 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:41:08.736 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:41:08.736 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:41:08.736 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:41:08.736 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:41:08.736 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:41:08.736 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:41:08.736 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:41:08.737 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:41:08.737 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:41:08.737 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:41:08.737 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:41:08.737 00:41:08.737 --- 10.0.0.3 ping statistics --- 00:41:08.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:08.737 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:41:08.737 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:41:08.737 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:41:08.737 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.090 ms 00:41:08.737 00:41:08.737 --- 10.0.0.4 ping statistics --- 00:41:08.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:08.737 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:41:08.737 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:41:08.737 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:08.737 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:41:08.737 00:41:08.737 --- 10.0.0.1 ping statistics --- 00:41:08.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:08.737 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:41:08.737 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:41:08.737 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:08.737 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:41:08.737 00:41:08.737 --- 10.0.0.2 ping statistics --- 00:41:08.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:08.737 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:41:08.737 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:08.737 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:41:08.737 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:08.737 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:08.737 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:08.737 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:08.737 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:08.737 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:08.737 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:08.737 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:41:08.737 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:08.737 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:08.737 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:08.737 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=77332 00:41:08.737 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:41:08.737 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 77332 00:41:08.737 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77332 ']' 00:41:08.737 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:08.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:08.737 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:08.737 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
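For reference, the veth/namespace topology that nvmf_veth_init builds in the trace above condenses to the shell sketch below. It is reconstructed from the ip/iptables commands logged by nvmf/common.sh (interface names and 10.0.0.x addresses are the ones the test uses); the individual link-up steps and the comment tags added by the ipts wrapper are omitted for brevity.

    # Target side lives in its own network namespace; initiator side stays in the root namespace.
    ip netns add nvmf_tgt_ns_spdk

    # Two initiator-facing and two target-facing veth pairs.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # Target ends move into the namespace and get 10.0.0.3/10.0.0.4; initiator ends keep 10.0.0.1/10.0.0.2.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # All bridge-side peers are enslaved to one bridge so the two sides can reach each other.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # Allow NVMe/TCP (port 4420) in from the initiator interfaces, plus bridged forwarding.
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT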
00:41:08.737 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:08.737 14:02:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:08.737 [2024-11-20 14:02:06.030144] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:41:08.737 [2024-11-20 14:02:06.030644] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:08.997 [2024-11-20 14:02:06.179685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:08.997 [2024-11-20 14:02:06.240957] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:08.997 [2024-11-20 14:02:06.241076] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:08.997 [2024-11-20 14:02:06.241114] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:08.997 [2024-11-20 14:02:06.241158] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:08.997 [2024-11-20 14:02:06.241176] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:08.997 [2024-11-20 14:02:06.241611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:08.997 [2024-11-20 14:02:06.296955] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:41:09.566 14:02:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:09.825 14:02:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:41:09.825 14:02:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:09.825 14:02:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:09.825 14:02:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:09.825 14:02:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:09.825 14:02:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:41:09.825 14:02:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:09.825 14:02:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:09.825 [2024-11-20 14:02:06.954481] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:09.825 [2024-11-20 14:02:06.962591] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:41:09.825 null0 00:41:09.825 [2024-11-20 14:02:06.994442] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:41:09.825 14:02:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:09.825 14:02:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77363 00:41:09.825 14:02:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:41:09.825 14:02:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77363 /tmp/host.sock 00:41:09.825 14:02:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77363 ']' 00:41:09.825 14:02:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:41:09.825 14:02:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:09.825 14:02:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:41:09.826 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:41:09.826 14:02:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:09.826 14:02:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:09.826 [2024-11-20 14:02:07.072064] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:41:09.826 [2024-11-20 14:02:07.072196] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77363 ] 00:41:10.085 [2024-11-20 14:02:07.221691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:10.085 [2024-11-20 14:02:07.296555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:10.655 14:02:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:10.655 14:02:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:41:10.655 14:02:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:10.655 14:02:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:41:10.655 14:02:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:10.655 14:02:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:10.655 14:02:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:10.655 14:02:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:41:10.655 14:02:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:10.655 14:02:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:10.915 [2024-11-20 14:02:08.044441] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:41:10.915 14:02:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:10.915 14:02:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:41:10.915 14:02:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:10.915 14:02:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:11.851 [2024-11-20 14:02:09.113137] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:41:11.851 [2024-11-20 14:02:09.113168] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:41:11.851 [2024-11-20 14:02:09.113184] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:41:11.851 [2024-11-20 14:02:09.119165] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:41:12.109 [2024-11-20 14:02:09.173472] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:41:12.109 [2024-11-20 14:02:09.174373] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1484fc0:1 started. 00:41:12.109 [2024-11-20 14:02:09.176117] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:41:12.109 [2024-11-20 14:02:09.176168] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:41:12.109 [2024-11-20 14:02:09.176190] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:41:12.109 [2024-11-20 14:02:09.176205] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:41:12.109 [2024-11-20 14:02:09.176227] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:41:12.109 14:02:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:12.109 14:02:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:41:12.109 14:02:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:41:12.109 [2024-11-20 14:02:09.181744] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1484fc0 was disconnected and freed. delete nvme_qpair. 
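At this point the discovery service has attached the remote subsystem and exposed its namespace as bdev nvme0n1 in the host app listening on /tmp/host.sock. The get_bdev_list/wait_for_bdev helpers the trace keeps invoking reduce to roughly the polling loop below (a sketch only; the test goes through its rpc_cmd wrapper, so the scripts/rpc.py path here is an assumption):

    # List the bdev names the host app currently exposes, as a single sorted line.
    get_bdev_list() {
        scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # Poll once per second until the bdev list matches the expected value,
    # e.g. "nvme0n1" right after attach, or "" once the controller has been torn down.
    wait_for_bdev() {
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }

    wait_for_bdev nvme0n1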
00:41:12.109 14:02:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:12.109 14:02:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:41:12.109 14:02:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:12.109 14:02:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:41:12.109 14:02:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:12.109 14:02:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:41:12.109 14:02:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:12.109 14:02:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:41:12.109 14:02:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:41:12.109 14:02:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:41:12.109 14:02:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:41:12.109 14:02:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:41:12.109 14:02:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:12.109 14:02:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:41:12.109 14:02:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:41:12.109 14:02:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:41:12.109 14:02:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:12.109 14:02:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:12.109 14:02:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:12.109 14:02:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:41:12.109 14:02:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:41:13.044 14:02:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:41:13.044 14:02:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:13.044 14:02:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:41:13.044 14:02:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:13.044 14:02:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:13.044 14:02:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:41:13.044 14:02:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:41:13.044 14:02:10 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:13.303 14:02:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:41:13.303 14:02:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:41:14.242 14:02:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:41:14.243 14:02:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:14.243 14:02:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:41:14.243 14:02:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:14.243 14:02:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:14.243 14:02:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:41:14.243 14:02:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:41:14.243 14:02:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:14.243 14:02:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:41:14.243 14:02:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:41:15.180 14:02:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:41:15.180 14:02:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:15.180 14:02:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:41:15.180 14:02:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:15.180 14:02:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:41:15.180 14:02:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:15.180 14:02:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:41:15.180 14:02:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:15.180 14:02:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:41:15.180 14:02:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:41:16.569 14:02:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:41:16.569 14:02:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:16.569 14:02:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:41:16.569 14:02:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:16.569 14:02:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:41:16.569 14:02:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:16.569 14:02:13 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:41:16.569 14:02:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:16.569 14:02:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:41:16.569 14:02:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:41:17.509 14:02:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:41:17.509 14:02:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:17.509 14:02:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:41:17.509 14:02:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:17.509 14:02:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:41:17.509 14:02:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:17.509 14:02:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:41:17.509 14:02:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:17.509 [2024-11-20 14:02:14.593674] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:41:17.509 [2024-11-20 14:02:14.593754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:41:17.509 [2024-11-20 14:02:14.593768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:17.509 [2024-11-20 14:02:14.593778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:41:17.509 [2024-11-20 14:02:14.593784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:17.509 [2024-11-20 14:02:14.593792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:41:17.509 [2024-11-20 14:02:14.593798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:17.509 [2024-11-20 14:02:14.593805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:41:17.509 [2024-11-20 14:02:14.593812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:17.509 [2024-11-20 14:02:14.593819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:41:17.509 [2024-11-20 14:02:14.593825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:17.509 [2024-11-20 14:02:14.593832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1461240 is same with the state(6) to be set 00:41:17.509 [2024-11-20 14:02:14.603650] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1461240 (9): Bad file descriptor 00:41:17.509 14:02:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:41:17.509 14:02:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:41:17.509 [2024-11-20 14:02:14.613648] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:41:17.509 [2024-11-20 14:02:14.613669] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:41:17.509 [2024-11-20 14:02:14.613674] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:41:17.509 [2024-11-20 14:02:14.613678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:41:17.509 [2024-11-20 14:02:14.613721] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:41:18.447 14:02:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:41:18.447 14:02:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:18.447 14:02:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:41:18.447 14:02:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:18.447 14:02:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:41:18.447 14:02:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:18.447 14:02:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:41:18.447 [2024-11-20 14:02:15.673808] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:41:18.447 [2024-11-20 14:02:15.673940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1461240 with addr=10.0.0.3, port=4420 00:41:18.447 [2024-11-20 14:02:15.673980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1461240 is same with the state(6) to be set 00:41:18.447 [2024-11-20 14:02:15.674053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1461240 (9): Bad file descriptor 00:41:18.447 [2024-11-20 14:02:15.675346] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:41:18.447 [2024-11-20 14:02:15.675455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:41:18.447 [2024-11-20 14:02:15.675482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:41:18.447 [2024-11-20 14:02:15.675507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:41:18.447 [2024-11-20 14:02:15.675528] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:41:18.447 [2024-11-20 14:02:15.675544] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:41:18.447 [2024-11-20 14:02:15.675557] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:41:18.447 [2024-11-20 14:02:15.675581] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:41:18.447 [2024-11-20 14:02:15.675595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:41:18.447 14:02:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:18.447 14:02:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:41:18.447 14:02:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:41:19.388 [2024-11-20 14:02:16.673761] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:41:19.388 [2024-11-20 14:02:16.673795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:41:19.388 [2024-11-20 14:02:16.673818] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:41:19.388 [2024-11-20 14:02:16.673826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:41:19.388 [2024-11-20 14:02:16.673834] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:41:19.388 [2024-11-20 14:02:16.673841] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:41:19.388 [2024-11-20 14:02:16.673845] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:41:19.388 [2024-11-20 14:02:16.673849] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:41:19.388 [2024-11-20 14:02:16.673882] bdev_nvme.c:7230:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:41:19.388 [2024-11-20 14:02:16.673917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:41:19.388 [2024-11-20 14:02:16.673928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:19.388 [2024-11-20 14:02:16.673939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:41:19.388 [2024-11-20 14:02:16.673946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:19.388 [2024-11-20 14:02:16.673954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:41:19.388 [2024-11-20 14:02:16.673961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:19.388 [2024-11-20 14:02:16.673981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:41:19.388 [2024-11-20 14:02:16.674005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:19.388 [2024-11-20 14:02:16.674013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:41:19.388 [2024-11-20 14:02:16.674019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:19.388 [2024-11-20 14:02:16.674026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
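The errno 110 timeouts, failed reconnect attempts and final "Remove discovery entry" above are the expected fallout of the ip addr del / ip link set ... down issued earlier (discovery_remove_ifc.sh lines 75-76 in the trace): the listener at 10.0.0.3:4420 vanished while the controller was attached. Given the attach options used, the sequence compresses to the sketch below (wait_for_bdev as sketched earlier):

    # Pull the listener address and take the target-side veth down inside the namespace.
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down

    # With --reconnect-delay-sec 1 the host retries about once per second, and with
    # --ctrlr-loss-timeout-sec 2 it gives up and deletes the controller after ~2 s of
    # failures, so the bdev list eventually drains to empty.
    wait_for_bdev ''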
00:41:19.388 [2024-11-20 14:02:16.674749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13eca20 (9): Bad file descriptor 00:41:19.388 [2024-11-20 14:02:16.675760] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:41:19.388 [2024-11-20 14:02:16.675786] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:41:19.648 14:02:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:41:19.648 14:02:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:19.648 14:02:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:19.648 14:02:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:41:19.648 14:02:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:19.648 14:02:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:41:19.648 14:02:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:41:19.648 14:02:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:19.648 14:02:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:41:19.648 14:02:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:41:19.648 14:02:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:41:19.648 14:02:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:41:19.648 14:02:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:41:19.648 14:02:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:19.648 14:02:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:41:19.648 14:02:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:41:19.648 14:02:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:19.648 14:02:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:41:19.648 14:02:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:19.648 14:02:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:19.648 14:02:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:41:19.648 14:02:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:41:20.585 14:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:41:20.585 14:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:20.585 14:02:17 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.585 14:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:41:20.585 14:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:20.585 14:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:41:20.585 14:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:41:20.585 14:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.585 14:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:41:20.585 14:02:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:41:21.537 [2024-11-20 14:02:18.679656] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:41:21.537 [2024-11-20 14:02:18.679681] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:41:21.537 [2024-11-20 14:02:18.679696] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:41:21.537 [2024-11-20 14:02:18.685679] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:41:21.537 [2024-11-20 14:02:18.739902] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:41:21.537 [2024-11-20 14:02:18.740627] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x143fa60:1 started. 00:41:21.537 [2024-11-20 14:02:18.741815] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:41:21.537 [2024-11-20 14:02:18.741858] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:41:21.537 [2024-11-20 14:02:18.741879] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:41:21.537 [2024-11-20 14:02:18.741898] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:41:21.537 [2024-11-20 14:02:18.741905] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:41:21.537 [2024-11-20 14:02:18.748433] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x143fa60 was disconnected and freed. delete nvme_qpair. 
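The fresh attach above (nvme1, qpair 0x143fa60) comes from the still-running discovery poller: once the address and link are restored (discovery_remove_ifc.sh lines 82-83 in the trace), it finds the 10.0.0.3:4420 subsystem again and creates a new controller, since the original one was already deleted. In shell terms the restore-and-verify step is simply:

    # Put the listener address back and bring the target-side veth up again.
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

    # The namespace reappears under a new controller name: nvme1n1 rather than nvme0n1.
    wait_for_bdev nvme1n1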
00:41:21.796 14:02:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:41:21.796 14:02:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:21.796 14:02:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:21.796 14:02:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:41:21.796 14:02:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:41:21.796 14:02:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:21.796 14:02:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:41:21.796 14:02:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:21.796 14:02:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:41:21.796 14:02:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:41:21.796 14:02:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77363 00:41:21.796 14:02:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77363 ']' 00:41:21.796 14:02:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77363 00:41:21.796 14:02:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:41:21.796 14:02:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:21.796 14:02:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77363 00:41:21.796 killing process with pid 77363 00:41:21.796 14:02:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:21.796 14:02:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:21.796 14:02:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77363' 00:41:21.796 14:02:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77363 00:41:21.796 14:02:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77363 00:41:22.056 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:41:22.056 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:22.056 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:41:22.056 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:22.056 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:41:22.056 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:22.056 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:22.056 rmmod nvme_tcp 00:41:22.056 rmmod nvme_fabrics 00:41:22.056 rmmod nvme_keyring 00:41:22.056 14:02:19 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:22.056 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:41:22.056 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:41:22.056 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 77332 ']' 00:41:22.056 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 77332 00:41:22.056 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77332 ']' 00:41:22.056 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77332 00:41:22.056 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:41:22.056 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:22.316 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77332 00:41:22.316 killing process with pid 77332 00:41:22.316 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:41:22.316 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:41:22.316 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77332' 00:41:22.316 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77332 00:41:22.316 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77332 00:41:22.316 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:22.316 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:22.316 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:22.316 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:41:22.316 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:41:22.316 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:41:22.316 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:22.316 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:22.316 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:41:22.316 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:41:22.576 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:41:22.576 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:41:22.576 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:41:22.576 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:41:22.576 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:41:22.576 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:41:22.576 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:41:22.576 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:41:22.576 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:41:22.576 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:41:22.576 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:41:22.576 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:41:22.576 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:41:22.576 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:22.576 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:22.576 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:22.836 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:41:22.836 00:41:22.836 real 0m14.636s 00:41:22.836 user 0m24.466s 00:41:22.836 sys 0m2.847s 00:41:22.836 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:22.836 14:02:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:22.836 ************************************ 00:41:22.836 END TEST nvmf_discovery_remove_ifc 00:41:22.836 ************************************ 00:41:22.836 14:02:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:41:22.836 14:02:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:41:22.836 14:02:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:22.836 14:02:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:41:22.836 ************************************ 00:41:22.836 START TEST nvmf_identify_kernel_target 00:41:22.836 ************************************ 00:41:22.836 14:02:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:41:22.836 * Looking for test storage... 
00:41:22.836 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:41:22.836 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:22.836 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:41:22.836 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:23.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:23.098 --rc genhtml_branch_coverage=1 00:41:23.098 --rc genhtml_function_coverage=1 00:41:23.098 --rc genhtml_legend=1 00:41:23.098 --rc geninfo_all_blocks=1 00:41:23.098 --rc geninfo_unexecuted_blocks=1 00:41:23.098 00:41:23.098 ' 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:23.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:23.098 --rc genhtml_branch_coverage=1 00:41:23.098 --rc genhtml_function_coverage=1 00:41:23.098 --rc genhtml_legend=1 00:41:23.098 --rc geninfo_all_blocks=1 00:41:23.098 --rc geninfo_unexecuted_blocks=1 00:41:23.098 00:41:23.098 ' 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:23.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:23.098 --rc genhtml_branch_coverage=1 00:41:23.098 --rc genhtml_function_coverage=1 00:41:23.098 --rc genhtml_legend=1 00:41:23.098 --rc geninfo_all_blocks=1 00:41:23.098 --rc geninfo_unexecuted_blocks=1 00:41:23.098 00:41:23.098 ' 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:23.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:23.098 --rc genhtml_branch_coverage=1 00:41:23.098 --rc genhtml_function_coverage=1 00:41:23.098 --rc genhtml_legend=1 00:41:23.098 --rc geninfo_all_blocks=1 00:41:23.098 --rc geninfo_unexecuted_blocks=1 00:41:23.098 00:41:23.098 ' 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=105ec898-1662-46bd-85be-b241e399edb9 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:23.098 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:23.099 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:41:23.099 14:02:20 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:41:23.099 14:02:20 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:41:23.099 Cannot find device "nvmf_init_br" 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:41:23.099 Cannot find device "nvmf_init_br2" 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:41:23.099 Cannot find device "nvmf_tgt_br" 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:41:23.099 Cannot find device "nvmf_tgt_br2" 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:41:23.099 Cannot find device "nvmf_init_br" 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:41:23.099 Cannot find device "nvmf_init_br2" 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:41:23.099 Cannot find device "nvmf_tgt_br" 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:41:23.099 Cannot find device "nvmf_tgt_br2" 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:41:23.099 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:41:23.359 Cannot find device "nvmf_br" 00:41:23.359 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:41:23.359 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:41:23.359 Cannot find device "nvmf_init_if" 00:41:23.359 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:41:23.359 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:41:23.359 Cannot find device "nvmf_init_if2" 00:41:23.359 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:41:23.359 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:41:23.359 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:23.359 14:02:20 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:41:23.359 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:41:23.359 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:23.359 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:41:23.359 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:41:23.359 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:41:23.359 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:41:23.359 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:41:23.359 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:41:23.359 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:41:23.359 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:41:23.359 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:41:23.359 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:41:23.359 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:41:23.359 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:41:23.359 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:41:23.359 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:41:23.359 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:41:23.359 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:41:23.359 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:41:23.359 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:41:23.359 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:41:23.359 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:41:23.359 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:41:23.359 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:41:23.359 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:41:23.359 14:02:20 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:41:23.359 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:41:23.360 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:41:23.360 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:41:23.360 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:41:23.360 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:41:23.360 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:41:23.360 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:41:23.360 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:41:23.360 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:41:23.360 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:41:23.360 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:41:23.360 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:41:23.360 00:41:23.360 --- 10.0.0.3 ping statistics --- 00:41:23.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:23.360 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:41:23.360 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:41:23.360 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:41:23.360 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.030 ms 00:41:23.360 00:41:23.360 --- 10.0.0.4 ping statistics --- 00:41:23.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:23.360 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:41:23.360 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:41:23.620 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:23.620 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:41:23.620 00:41:23.620 --- 10.0.0.1 ping statistics --- 00:41:23.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:23.620 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:41:23.620 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:41:23.620 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:41:23.620 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:41:23.620 00:41:23.620 --- 10.0.0.2 ping statistics --- 00:41:23.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:23.620 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:41:23.620 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:23.620 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:41:23.620 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:23.620 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:23.620 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:23.620 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:23.620 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:23.620 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:23.620 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:23.620 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:41:23.620 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:41:23.620 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:41:23.620 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:23.620 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:23.620 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:23.620 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:23.620 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:23.620 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:23.620 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:23.620 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:23.620 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:23.620 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:41:23.620 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:41:23.620 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:41:23.620 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:41:23.620 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:23.620 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:23.620 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:41:23.620 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:41:23.620 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:41:23.620 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:41:23.620 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:41:23.620 14:02:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:41:24.190 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:41:24.190 Waiting for block devices as requested 00:41:24.190 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:41:24.190 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:41:24.190 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:41:24.190 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:41:24.190 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:41:24.190 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:41:24.190 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:41:24.190 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:41:24.190 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:41:24.190 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:41:24.190 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:41:24.450 No valid GPT data, bailing 00:41:24.450 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:41:24.450 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:41:24.450 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:41:24.450 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:41:24.450 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:41:24.450 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:41:24.450 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:41:24.450 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:41:24.450 14:02:21 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:41:24.450 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:41:24.450 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:41:24.450 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:41:24.450 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:41:24.450 No valid GPT data, bailing 00:41:24.450 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:41:24.450 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:41:24.450 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:41:24.450 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:41:24.450 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:41:24.450 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:41:24.450 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:41:24.451 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:41:24.451 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:41:24.451 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:41:24.451 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:41:24.451 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:41:24.451 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:41:24.451 No valid GPT data, bailing 00:41:24.451 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:41:24.451 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:41:24.451 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:41:24.451 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:41:24.451 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:41:24.451 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:41:24.451 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:41:24.451 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:41:24.451 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:41:24.451 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:41:24.451 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:41:24.451 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:41:24.451 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:41:24.710 No valid GPT data, bailing 00:41:24.710 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:41:24.710 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:41:24.710 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:41:24.710 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:41:24.710 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:41:24.710 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:24.710 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:24.710 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:41:24.710 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:41:24.710 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:41:24.710 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:41:24.710 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:41:24.710 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:41:24.710 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:41:24.710 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:41:24.710 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:41:24.710 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:41:24.710 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid=105ec898-1662-46bd-85be-b241e399edb9 -a 10.0.0.1 -t tcp -s 4420 00:41:24.710 00:41:24.710 Discovery Log Number of Records 2, Generation counter 2 00:41:24.710 =====Discovery Log Entry 0====== 00:41:24.710 trtype: tcp 00:41:24.710 adrfam: ipv4 00:41:24.710 subtype: current discovery subsystem 00:41:24.710 treq: not specified, sq flow control disable supported 00:41:24.710 portid: 1 00:41:24.710 trsvcid: 4420 00:41:24.710 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:41:24.710 traddr: 10.0.0.1 00:41:24.710 eflags: none 00:41:24.710 sectype: none 00:41:24.710 =====Discovery Log Entry 1====== 00:41:24.710 trtype: tcp 00:41:24.710 adrfam: ipv4 00:41:24.710 subtype: nvme subsystem 00:41:24.710 treq: not 
specified, sq flow control disable supported 00:41:24.710 portid: 1 00:41:24.710 trsvcid: 4420 00:41:24.710 subnqn: nqn.2016-06.io.spdk:testnqn 00:41:24.710 traddr: 10.0.0.1 00:41:24.710 eflags: none 00:41:24.710 sectype: none 00:41:24.710 14:02:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:41:24.710 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:41:24.971 ===================================================== 00:41:24.971 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:41:24.971 ===================================================== 00:41:24.971 Controller Capabilities/Features 00:41:24.971 ================================ 00:41:24.971 Vendor ID: 0000 00:41:24.971 Subsystem Vendor ID: 0000 00:41:24.971 Serial Number: dc4004776b674242e48b 00:41:24.971 Model Number: Linux 00:41:24.971 Firmware Version: 6.8.9-20 00:41:24.971 Recommended Arb Burst: 0 00:41:24.971 IEEE OUI Identifier: 00 00 00 00:41:24.971 Multi-path I/O 00:41:24.971 May have multiple subsystem ports: No 00:41:24.971 May have multiple controllers: No 00:41:24.972 Associated with SR-IOV VF: No 00:41:24.972 Max Data Transfer Size: Unlimited 00:41:24.972 Max Number of Namespaces: 0 00:41:24.972 Max Number of I/O Queues: 1024 00:41:24.972 NVMe Specification Version (VS): 1.3 00:41:24.972 NVMe Specification Version (Identify): 1.3 00:41:24.972 Maximum Queue Entries: 1024 00:41:24.972 Contiguous Queues Required: No 00:41:24.972 Arbitration Mechanisms Supported 00:41:24.972 Weighted Round Robin: Not Supported 00:41:24.972 Vendor Specific: Not Supported 00:41:24.972 Reset Timeout: 7500 ms 00:41:24.972 Doorbell Stride: 4 bytes 00:41:24.972 NVM Subsystem Reset: Not Supported 00:41:24.972 Command Sets Supported 00:41:24.972 NVM Command Set: Supported 00:41:24.972 Boot Partition: Not Supported 00:41:24.972 Memory Page Size Minimum: 4096 bytes 00:41:24.972 Memory Page Size Maximum: 4096 bytes 00:41:24.972 Persistent Memory Region: Not Supported 00:41:24.972 Optional Asynchronous Events Supported 00:41:24.972 Namespace Attribute Notices: Not Supported 00:41:24.972 Firmware Activation Notices: Not Supported 00:41:24.972 ANA Change Notices: Not Supported 00:41:24.972 PLE Aggregate Log Change Notices: Not Supported 00:41:24.972 LBA Status Info Alert Notices: Not Supported 00:41:24.972 EGE Aggregate Log Change Notices: Not Supported 00:41:24.972 Normal NVM Subsystem Shutdown event: Not Supported 00:41:24.972 Zone Descriptor Change Notices: Not Supported 00:41:24.972 Discovery Log Change Notices: Supported 00:41:24.972 Controller Attributes 00:41:24.972 128-bit Host Identifier: Not Supported 00:41:24.972 Non-Operational Permissive Mode: Not Supported 00:41:24.972 NVM Sets: Not Supported 00:41:24.972 Read Recovery Levels: Not Supported 00:41:24.972 Endurance Groups: Not Supported 00:41:24.972 Predictable Latency Mode: Not Supported 00:41:24.972 Traffic Based Keep ALive: Not Supported 00:41:24.972 Namespace Granularity: Not Supported 00:41:24.972 SQ Associations: Not Supported 00:41:24.972 UUID List: Not Supported 00:41:24.972 Multi-Domain Subsystem: Not Supported 00:41:24.972 Fixed Capacity Management: Not Supported 00:41:24.972 Variable Capacity Management: Not Supported 00:41:24.972 Delete Endurance Group: Not Supported 00:41:24.972 Delete NVM Set: Not Supported 00:41:24.972 Extended LBA Formats Supported: Not Supported 00:41:24.972 Flexible Data 
Placement Supported: Not Supported 00:41:24.972 00:41:24.972 Controller Memory Buffer Support 00:41:24.972 ================================ 00:41:24.972 Supported: No 00:41:24.972 00:41:24.972 Persistent Memory Region Support 00:41:24.972 ================================ 00:41:24.972 Supported: No 00:41:24.972 00:41:24.972 Admin Command Set Attributes 00:41:24.972 ============================ 00:41:24.972 Security Send/Receive: Not Supported 00:41:24.972 Format NVM: Not Supported 00:41:24.972 Firmware Activate/Download: Not Supported 00:41:24.972 Namespace Management: Not Supported 00:41:24.972 Device Self-Test: Not Supported 00:41:24.972 Directives: Not Supported 00:41:24.972 NVMe-MI: Not Supported 00:41:24.972 Virtualization Management: Not Supported 00:41:24.972 Doorbell Buffer Config: Not Supported 00:41:24.972 Get LBA Status Capability: Not Supported 00:41:24.972 Command & Feature Lockdown Capability: Not Supported 00:41:24.972 Abort Command Limit: 1 00:41:24.972 Async Event Request Limit: 1 00:41:24.972 Number of Firmware Slots: N/A 00:41:24.972 Firmware Slot 1 Read-Only: N/A 00:41:24.972 Firmware Activation Without Reset: N/A 00:41:24.972 Multiple Update Detection Support: N/A 00:41:24.972 Firmware Update Granularity: No Information Provided 00:41:24.972 Per-Namespace SMART Log: No 00:41:24.972 Asymmetric Namespace Access Log Page: Not Supported 00:41:24.972 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:41:24.972 Command Effects Log Page: Not Supported 00:41:24.972 Get Log Page Extended Data: Supported 00:41:24.972 Telemetry Log Pages: Not Supported 00:41:24.972 Persistent Event Log Pages: Not Supported 00:41:24.972 Supported Log Pages Log Page: May Support 00:41:24.972 Commands Supported & Effects Log Page: Not Supported 00:41:24.972 Feature Identifiers & Effects Log Page:May Support 00:41:24.972 NVMe-MI Commands & Effects Log Page: May Support 00:41:24.972 Data Area 4 for Telemetry Log: Not Supported 00:41:24.972 Error Log Page Entries Supported: 1 00:41:24.972 Keep Alive: Not Supported 00:41:24.972 00:41:24.972 NVM Command Set Attributes 00:41:24.972 ========================== 00:41:24.972 Submission Queue Entry Size 00:41:24.972 Max: 1 00:41:24.972 Min: 1 00:41:24.972 Completion Queue Entry Size 00:41:24.972 Max: 1 00:41:24.972 Min: 1 00:41:24.972 Number of Namespaces: 0 00:41:24.972 Compare Command: Not Supported 00:41:24.972 Write Uncorrectable Command: Not Supported 00:41:24.972 Dataset Management Command: Not Supported 00:41:24.972 Write Zeroes Command: Not Supported 00:41:24.972 Set Features Save Field: Not Supported 00:41:24.972 Reservations: Not Supported 00:41:24.972 Timestamp: Not Supported 00:41:24.972 Copy: Not Supported 00:41:24.972 Volatile Write Cache: Not Present 00:41:24.972 Atomic Write Unit (Normal): 1 00:41:24.972 Atomic Write Unit (PFail): 1 00:41:24.972 Atomic Compare & Write Unit: 1 00:41:24.972 Fused Compare & Write: Not Supported 00:41:24.972 Scatter-Gather List 00:41:24.972 SGL Command Set: Supported 00:41:24.972 SGL Keyed: Not Supported 00:41:24.972 SGL Bit Bucket Descriptor: Not Supported 00:41:24.972 SGL Metadata Pointer: Not Supported 00:41:24.972 Oversized SGL: Not Supported 00:41:24.972 SGL Metadata Address: Not Supported 00:41:24.972 SGL Offset: Supported 00:41:24.972 Transport SGL Data Block: Not Supported 00:41:24.972 Replay Protected Memory Block: Not Supported 00:41:24.972 00:41:24.972 Firmware Slot Information 00:41:24.972 ========================= 00:41:24.972 Active slot: 0 00:41:24.972 00:41:24.972 00:41:24.972 Error Log 
00:41:24.972 ========= 00:41:24.972 00:41:24.972 Active Namespaces 00:41:24.972 ================= 00:41:24.972 Discovery Log Page 00:41:24.972 ================== 00:41:24.972 Generation Counter: 2 00:41:24.972 Number of Records: 2 00:41:24.972 Record Format: 0 00:41:24.972 00:41:24.972 Discovery Log Entry 0 00:41:24.972 ---------------------- 00:41:24.972 Transport Type: 3 (TCP) 00:41:24.972 Address Family: 1 (IPv4) 00:41:24.972 Subsystem Type: 3 (Current Discovery Subsystem) 00:41:24.972 Entry Flags: 00:41:24.972 Duplicate Returned Information: 0 00:41:24.972 Explicit Persistent Connection Support for Discovery: 0 00:41:24.972 Transport Requirements: 00:41:24.972 Secure Channel: Not Specified 00:41:24.972 Port ID: 1 (0x0001) 00:41:24.972 Controller ID: 65535 (0xffff) 00:41:24.972 Admin Max SQ Size: 32 00:41:24.972 Transport Service Identifier: 4420 00:41:24.972 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:41:24.972 Transport Address: 10.0.0.1 00:41:24.972 Discovery Log Entry 1 00:41:24.972 ---------------------- 00:41:24.972 Transport Type: 3 (TCP) 00:41:24.972 Address Family: 1 (IPv4) 00:41:24.972 Subsystem Type: 2 (NVM Subsystem) 00:41:24.972 Entry Flags: 00:41:24.972 Duplicate Returned Information: 0 00:41:24.972 Explicit Persistent Connection Support for Discovery: 0 00:41:24.972 Transport Requirements: 00:41:24.972 Secure Channel: Not Specified 00:41:24.972 Port ID: 1 (0x0001) 00:41:24.972 Controller ID: 65535 (0xffff) 00:41:24.972 Admin Max SQ Size: 32 00:41:24.972 Transport Service Identifier: 4420 00:41:24.972 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:41:24.972 Transport Address: 10.0.0.1 00:41:24.972 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:24.972 get_feature(0x01) failed 00:41:24.972 get_feature(0x02) failed 00:41:24.972 get_feature(0x04) failed 00:41:24.972 ===================================================== 00:41:24.972 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:41:24.972 ===================================================== 00:41:24.972 Controller Capabilities/Features 00:41:24.972 ================================ 00:41:24.972 Vendor ID: 0000 00:41:24.972 Subsystem Vendor ID: 0000 00:41:24.972 Serial Number: b2af2b396373d449fc4d 00:41:24.972 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:41:24.972 Firmware Version: 6.8.9-20 00:41:24.972 Recommended Arb Burst: 6 00:41:24.972 IEEE OUI Identifier: 00 00 00 00:41:24.972 Multi-path I/O 00:41:24.972 May have multiple subsystem ports: Yes 00:41:24.972 May have multiple controllers: Yes 00:41:24.973 Associated with SR-IOV VF: No 00:41:24.973 Max Data Transfer Size: Unlimited 00:41:24.973 Max Number of Namespaces: 1024 00:41:24.973 Max Number of I/O Queues: 128 00:41:24.973 NVMe Specification Version (VS): 1.3 00:41:24.973 NVMe Specification Version (Identify): 1.3 00:41:24.973 Maximum Queue Entries: 1024 00:41:24.973 Contiguous Queues Required: No 00:41:24.973 Arbitration Mechanisms Supported 00:41:24.973 Weighted Round Robin: Not Supported 00:41:24.973 Vendor Specific: Not Supported 00:41:24.973 Reset Timeout: 7500 ms 00:41:24.973 Doorbell Stride: 4 bytes 00:41:24.973 NVM Subsystem Reset: Not Supported 00:41:24.973 Command Sets Supported 00:41:24.973 NVM Command Set: Supported 00:41:24.973 Boot Partition: Not Supported 00:41:24.973 Memory 
Page Size Minimum: 4096 bytes 00:41:24.973 Memory Page Size Maximum: 4096 bytes 00:41:24.973 Persistent Memory Region: Not Supported 00:41:24.973 Optional Asynchronous Events Supported 00:41:24.973 Namespace Attribute Notices: Supported 00:41:24.973 Firmware Activation Notices: Not Supported 00:41:24.973 ANA Change Notices: Supported 00:41:24.973 PLE Aggregate Log Change Notices: Not Supported 00:41:24.973 LBA Status Info Alert Notices: Not Supported 00:41:24.973 EGE Aggregate Log Change Notices: Not Supported 00:41:24.973 Normal NVM Subsystem Shutdown event: Not Supported 00:41:24.973 Zone Descriptor Change Notices: Not Supported 00:41:24.973 Discovery Log Change Notices: Not Supported 00:41:24.973 Controller Attributes 00:41:24.973 128-bit Host Identifier: Supported 00:41:24.973 Non-Operational Permissive Mode: Not Supported 00:41:24.973 NVM Sets: Not Supported 00:41:24.973 Read Recovery Levels: Not Supported 00:41:24.973 Endurance Groups: Not Supported 00:41:24.973 Predictable Latency Mode: Not Supported 00:41:24.973 Traffic Based Keep ALive: Supported 00:41:24.973 Namespace Granularity: Not Supported 00:41:24.973 SQ Associations: Not Supported 00:41:24.973 UUID List: Not Supported 00:41:24.973 Multi-Domain Subsystem: Not Supported 00:41:24.973 Fixed Capacity Management: Not Supported 00:41:24.973 Variable Capacity Management: Not Supported 00:41:24.973 Delete Endurance Group: Not Supported 00:41:24.973 Delete NVM Set: Not Supported 00:41:24.973 Extended LBA Formats Supported: Not Supported 00:41:24.973 Flexible Data Placement Supported: Not Supported 00:41:24.973 00:41:24.973 Controller Memory Buffer Support 00:41:24.973 ================================ 00:41:24.973 Supported: No 00:41:24.973 00:41:24.973 Persistent Memory Region Support 00:41:24.973 ================================ 00:41:24.973 Supported: No 00:41:24.973 00:41:24.973 Admin Command Set Attributes 00:41:24.973 ============================ 00:41:24.973 Security Send/Receive: Not Supported 00:41:24.973 Format NVM: Not Supported 00:41:24.973 Firmware Activate/Download: Not Supported 00:41:24.973 Namespace Management: Not Supported 00:41:24.973 Device Self-Test: Not Supported 00:41:24.973 Directives: Not Supported 00:41:24.973 NVMe-MI: Not Supported 00:41:24.973 Virtualization Management: Not Supported 00:41:24.973 Doorbell Buffer Config: Not Supported 00:41:24.973 Get LBA Status Capability: Not Supported 00:41:24.973 Command & Feature Lockdown Capability: Not Supported 00:41:24.973 Abort Command Limit: 4 00:41:24.973 Async Event Request Limit: 4 00:41:24.973 Number of Firmware Slots: N/A 00:41:24.973 Firmware Slot 1 Read-Only: N/A 00:41:24.973 Firmware Activation Without Reset: N/A 00:41:24.973 Multiple Update Detection Support: N/A 00:41:24.973 Firmware Update Granularity: No Information Provided 00:41:24.973 Per-Namespace SMART Log: Yes 00:41:24.973 Asymmetric Namespace Access Log Page: Supported 00:41:24.973 ANA Transition Time : 10 sec 00:41:24.973 00:41:24.973 Asymmetric Namespace Access Capabilities 00:41:24.973 ANA Optimized State : Supported 00:41:24.973 ANA Non-Optimized State : Supported 00:41:24.973 ANA Inaccessible State : Supported 00:41:24.973 ANA Persistent Loss State : Supported 00:41:24.973 ANA Change State : Supported 00:41:24.973 ANAGRPID is not changed : No 00:41:24.973 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:41:24.973 00:41:24.973 ANA Group Identifier Maximum : 128 00:41:24.973 Number of ANA Group Identifiers : 128 00:41:24.973 Max Number of Allowed Namespaces : 1024 00:41:24.973 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:41:24.973 Command Effects Log Page: Supported 00:41:24.973 Get Log Page Extended Data: Supported 00:41:24.973 Telemetry Log Pages: Not Supported 00:41:24.973 Persistent Event Log Pages: Not Supported 00:41:24.973 Supported Log Pages Log Page: May Support 00:41:24.973 Commands Supported & Effects Log Page: Not Supported 00:41:24.973 Feature Identifiers & Effects Log Page:May Support 00:41:24.973 NVMe-MI Commands & Effects Log Page: May Support 00:41:24.973 Data Area 4 for Telemetry Log: Not Supported 00:41:24.973 Error Log Page Entries Supported: 128 00:41:24.973 Keep Alive: Supported 00:41:24.973 Keep Alive Granularity: 1000 ms 00:41:24.973 00:41:24.973 NVM Command Set Attributes 00:41:24.973 ========================== 00:41:24.973 Submission Queue Entry Size 00:41:24.973 Max: 64 00:41:24.973 Min: 64 00:41:24.973 Completion Queue Entry Size 00:41:24.973 Max: 16 00:41:24.973 Min: 16 00:41:24.973 Number of Namespaces: 1024 00:41:24.973 Compare Command: Not Supported 00:41:24.973 Write Uncorrectable Command: Not Supported 00:41:24.973 Dataset Management Command: Supported 00:41:24.973 Write Zeroes Command: Supported 00:41:24.973 Set Features Save Field: Not Supported 00:41:24.973 Reservations: Not Supported 00:41:24.973 Timestamp: Not Supported 00:41:24.973 Copy: Not Supported 00:41:24.973 Volatile Write Cache: Present 00:41:24.973 Atomic Write Unit (Normal): 1 00:41:24.973 Atomic Write Unit (PFail): 1 00:41:24.973 Atomic Compare & Write Unit: 1 00:41:24.973 Fused Compare & Write: Not Supported 00:41:24.973 Scatter-Gather List 00:41:24.973 SGL Command Set: Supported 00:41:24.973 SGL Keyed: Not Supported 00:41:24.973 SGL Bit Bucket Descriptor: Not Supported 00:41:24.973 SGL Metadata Pointer: Not Supported 00:41:24.973 Oversized SGL: Not Supported 00:41:24.973 SGL Metadata Address: Not Supported 00:41:24.973 SGL Offset: Supported 00:41:24.973 Transport SGL Data Block: Not Supported 00:41:24.973 Replay Protected Memory Block: Not Supported 00:41:24.973 00:41:24.973 Firmware Slot Information 00:41:24.973 ========================= 00:41:24.973 Active slot: 0 00:41:24.973 00:41:24.973 Asymmetric Namespace Access 00:41:24.973 =========================== 00:41:24.973 Change Count : 0 00:41:24.973 Number of ANA Group Descriptors : 1 00:41:24.973 ANA Group Descriptor : 0 00:41:24.973 ANA Group ID : 1 00:41:24.973 Number of NSID Values : 1 00:41:24.973 Change Count : 0 00:41:24.973 ANA State : 1 00:41:24.973 Namespace Identifier : 1 00:41:24.973 00:41:24.973 Commands Supported and Effects 00:41:24.973 ============================== 00:41:24.973 Admin Commands 00:41:24.973 -------------- 00:41:24.973 Get Log Page (02h): Supported 00:41:24.973 Identify (06h): Supported 00:41:24.973 Abort (08h): Supported 00:41:24.973 Set Features (09h): Supported 00:41:24.973 Get Features (0Ah): Supported 00:41:24.973 Asynchronous Event Request (0Ch): Supported 00:41:24.973 Keep Alive (18h): Supported 00:41:24.973 I/O Commands 00:41:24.973 ------------ 00:41:24.973 Flush (00h): Supported 00:41:24.973 Write (01h): Supported LBA-Change 00:41:24.973 Read (02h): Supported 00:41:24.973 Write Zeroes (08h): Supported LBA-Change 00:41:24.973 Dataset Management (09h): Supported 00:41:24.973 00:41:24.973 Error Log 00:41:24.973 ========= 00:41:24.973 Entry: 0 00:41:24.973 Error Count: 0x3 00:41:24.973 Submission Queue Id: 0x0 00:41:24.973 Command Id: 0x5 00:41:24.973 Phase Bit: 0 00:41:24.973 Status Code: 0x2 00:41:24.973 Status Code Type: 0x0 00:41:24.973 Do Not Retry: 1 00:41:24.973 Error 
Location: 0x28 00:41:24.973 LBA: 0x0 00:41:24.973 Namespace: 0x0 00:41:24.973 Vendor Log Page: 0x0 00:41:24.973 ----------- 00:41:24.973 Entry: 1 00:41:24.973 Error Count: 0x2 00:41:24.973 Submission Queue Id: 0x0 00:41:24.973 Command Id: 0x5 00:41:24.973 Phase Bit: 0 00:41:24.973 Status Code: 0x2 00:41:24.973 Status Code Type: 0x0 00:41:24.973 Do Not Retry: 1 00:41:24.973 Error Location: 0x28 00:41:24.973 LBA: 0x0 00:41:24.973 Namespace: 0x0 00:41:24.973 Vendor Log Page: 0x0 00:41:24.973 ----------- 00:41:24.973 Entry: 2 00:41:24.973 Error Count: 0x1 00:41:24.973 Submission Queue Id: 0x0 00:41:24.973 Command Id: 0x4 00:41:24.973 Phase Bit: 0 00:41:24.974 Status Code: 0x2 00:41:24.974 Status Code Type: 0x0 00:41:24.974 Do Not Retry: 1 00:41:24.974 Error Location: 0x28 00:41:24.974 LBA: 0x0 00:41:24.974 Namespace: 0x0 00:41:24.974 Vendor Log Page: 0x0 00:41:24.974 00:41:24.974 Number of Queues 00:41:24.974 ================ 00:41:24.974 Number of I/O Submission Queues: 128 00:41:24.974 Number of I/O Completion Queues: 128 00:41:24.974 00:41:24.974 ZNS Specific Controller Data 00:41:24.974 ============================ 00:41:24.974 Zone Append Size Limit: 0 00:41:24.974 00:41:24.974 00:41:24.974 Active Namespaces 00:41:24.974 ================= 00:41:24.974 get_feature(0x05) failed 00:41:24.974 Namespace ID:1 00:41:24.974 Command Set Identifier: NVM (00h) 00:41:24.974 Deallocate: Supported 00:41:24.974 Deallocated/Unwritten Error: Not Supported 00:41:24.974 Deallocated Read Value: Unknown 00:41:24.974 Deallocate in Write Zeroes: Not Supported 00:41:24.974 Deallocated Guard Field: 0xFFFF 00:41:24.974 Flush: Supported 00:41:24.974 Reservation: Not Supported 00:41:24.974 Namespace Sharing Capabilities: Multiple Controllers 00:41:24.974 Size (in LBAs): 1310720 (5GiB) 00:41:24.974 Capacity (in LBAs): 1310720 (5GiB) 00:41:24.974 Utilization (in LBAs): 1310720 (5GiB) 00:41:24.974 UUID: 7a5af539-075f-41ef-9dd6-efa322a6a79b 00:41:24.974 Thin Provisioning: Not Supported 00:41:24.974 Per-NS Atomic Units: Yes 00:41:24.974 Atomic Boundary Size (Normal): 0 00:41:24.974 Atomic Boundary Size (PFail): 0 00:41:24.974 Atomic Boundary Offset: 0 00:41:24.974 NGUID/EUI64 Never Reused: No 00:41:24.974 ANA group ID: 1 00:41:24.974 Namespace Write Protected: No 00:41:24.974 Number of LBA Formats: 1 00:41:24.974 Current LBA Format: LBA Format #00 00:41:24.974 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:41:24.974 00:41:24.974 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:41:24.974 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:24.974 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:41:25.234 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:25.234 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:41:25.234 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:25.234 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:25.234 rmmod nvme_tcp 00:41:25.234 rmmod nvme_fabrics 00:41:25.234 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:25.234 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:41:25.234 14:02:22 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:41:25.235 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:41:25.235 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:25.235 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:25.235 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:25.235 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:41:25.235 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:41:25.235 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:25.235 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:41:25.235 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:25.235 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:41:25.235 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:41:25.235 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:41:25.235 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:41:25.235 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:41:25.235 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:41:25.235 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:41:25.235 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:41:25.235 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:41:25.235 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:41:25.235 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:41:25.495 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:41:25.495 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:41:25.495 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:41:25.495 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:41:25.495 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:25.495 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:25.495 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:25.495 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:41:25.495 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:41:25.495 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:41:25.495 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:41:25.495 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:25.495 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:25.495 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:41:25.495 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:25.495 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:41:25.495 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:41:25.495 14:02:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:41:26.435 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:41:26.435 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:41:26.435 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:41:26.435 00:41:26.435 real 0m3.751s 00:41:26.435 user 0m1.211s 00:41:26.435 sys 0m1.984s 00:41:26.435 14:02:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:26.435 14:02:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:41:26.435 ************************************ 00:41:26.435 END TEST nvmf_identify_kernel_target 00:41:26.435 ************************************ 00:41:26.695 14:02:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:41:26.695 14:02:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:41:26.695 14:02:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:26.695 14:02:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:41:26.695 ************************************ 00:41:26.695 START TEST nvmf_auth_host 00:41:26.695 ************************************ 00:41:26.695 14:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:41:26.695 * Looking for test storage... 
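The clean_kernel_target trace above unwinds the configfs-based kernel NVMe-oF target left over from the identify_kernel_target run: disable the namespace, unlink the subsystem from port 1, remove the namespace, port and subsystem directories, then unload the nvmet modules. A minimal stand-alone sketch of that teardown follows; the subsystem and port names are the ones used by this test, and the redirect target of the traced "echo 0" is not shown by xtrace, so it is assumed to be the namespace enable attribute.

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  echo 0 > "$subsys/namespaces/1/enable"                 # assumed target of the traced 'echo 0'
  rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"   # unlink the subsystem from the port
  rmdir "$subsys/namespaces/1"                           # namespace, then port, then subsystem
  rmdir "$port"
  rmdir "$subsys"
  modprobe -r nvmet_tcp nvmet                            # unload the kernel target modules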
00:41:26.695 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:41:26.695 14:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:26.695 14:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:41:26.695 14:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:26.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:26.956 --rc genhtml_branch_coverage=1 00:41:26.956 --rc genhtml_function_coverage=1 00:41:26.956 --rc genhtml_legend=1 00:41:26.956 --rc geninfo_all_blocks=1 00:41:26.956 --rc geninfo_unexecuted_blocks=1 00:41:26.956 00:41:26.956 ' 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:26.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:26.956 --rc genhtml_branch_coverage=1 00:41:26.956 --rc genhtml_function_coverage=1 00:41:26.956 --rc genhtml_legend=1 00:41:26.956 --rc geninfo_all_blocks=1 00:41:26.956 --rc geninfo_unexecuted_blocks=1 00:41:26.956 00:41:26.956 ' 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:26.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:26.956 --rc genhtml_branch_coverage=1 00:41:26.956 --rc genhtml_function_coverage=1 00:41:26.956 --rc genhtml_legend=1 00:41:26.956 --rc geninfo_all_blocks=1 00:41:26.956 --rc geninfo_unexecuted_blocks=1 00:41:26.956 00:41:26.956 ' 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:26.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:26.956 --rc genhtml_branch_coverage=1 00:41:26.956 --rc genhtml_function_coverage=1 00:41:26.956 --rc genhtml_legend=1 00:41:26.956 --rc geninfo_all_blocks=1 00:41:26.956 --rc geninfo_unexecuted_blocks=1 00:41:26.956 00:41:26.956 ' 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=105ec898-1662-46bd-85be-b241e399edb9 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:26.956 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:26.957 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:41:26.957 Cannot find device "nvmf_init_br" 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:41:26.957 Cannot find device "nvmf_init_br2" 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:41:26.957 Cannot find device "nvmf_tgt_br" 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:41:26.957 Cannot find device "nvmf_tgt_br2" 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:41:26.957 Cannot find device "nvmf_init_br" 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:41:26.957 Cannot find device "nvmf_init_br2" 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:41:26.957 Cannot find device "nvmf_tgt_br" 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:41:26.957 Cannot find device "nvmf_tgt_br2" 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:41:26.957 Cannot find device "nvmf_br" 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:41:26.957 Cannot find device "nvmf_init_if" 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:41:26.957 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:41:27.217 Cannot find device "nvmf_init_if2" 00:41:27.217 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:41:27.217 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:41:27.217 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:27.217 14:02:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:41:27.217 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:41:27.217 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:27.217 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:41:27.217 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:41:27.217 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:41:27.217 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:41:27.217 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:41:27.218 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:41:27.218 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:41:27.218 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:41:27.218 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:41:27.218 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:41:27.218 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:41:27.218 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:41:27.218 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:41:27.218 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:41:27.218 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:41:27.218 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:41:27.218 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:41:27.218 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:41:27.218 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:41:27.218 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:41:27.218 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:41:27.218 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:41:27.218 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:41:27.218 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:41:27.218 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:41:27.218 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
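The nvmf_veth_init sequence traced above builds the virtual test network: veth pairs whose initiator ends (10.0.0.1/24, 10.0.0.2/24) stay in the root namespace, whose target ends (10.0.0.3/24, 10.0.0.4/24) are moved into the nvmf_tgt_ns_spdk namespace, and whose peer ends are enslaved to an nvmf_br bridge. Condensed to a single interface pair, the same topology looks roughly like this (names and addresses taken from the trace):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end + its bridge port
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end + its bridge port
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br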
00:41:27.218 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:41:27.218 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:41:27.218 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:41:27.218 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:41:27.218 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:41:27.218 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:41:27.218 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:41:27.218 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:41:27.478 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:41:27.478 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.138 ms 00:41:27.478 00:41:27.478 --- 10.0.0.3 ping statistics --- 00:41:27.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:27.478 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:41:27.478 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:41:27.478 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:41:27.478 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.110 ms 00:41:27.478 00:41:27.478 --- 10.0.0.4 ping statistics --- 00:41:27.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:27.478 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:41:27.478 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:41:27.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:27.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:41:27.478 00:41:27.478 --- 10.0.0.1 ping statistics --- 00:41:27.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:27.478 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:41:27.478 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:41:27.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:41:27.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:41:27.478 00:41:27.478 --- 10.0.0.2 ping statistics --- 00:41:27.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:27.478 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:41:27.478 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:27.478 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:41:27.478 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:27.478 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:27.478 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:27.478 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:27.478 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:27.478 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:27.478 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:27.478 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:41:27.478 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:27.478 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:27.478 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:27.478 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=78366 00:41:27.478 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:41:27.478 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 78366 00:41:27.478 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78366 ']' 00:41:27.478 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:27.478 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:27.478 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
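The ipts helper traced at nvmf/common.sh@790 tags every iptables rule it inserts with an SPDK_NVMF comment, which is what lets the nvmf_tcp_fini/iptr step at the top of this section strip them again with iptables-save | grep -v SPDK_NVMF | iptables-restore. A sketch of that tag-and-strip pattern, reusing the port 4420 rules seen in the trace (the wrapper body is an approximation, not copied from nvmf/common.sh):

  ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }   # assumed wrapper shape
  ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # teardown later removes only the tagged rules:
  iptables-save | grep -v SPDK_NVMF | iptables-restore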
00:41:27.478 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:27.478 14:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:28.419 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:28.419 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:41:28.419 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:28.419 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:28.419 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:28.419 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:28.419 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:41:28.419 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:41:28.419 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:41:28.419 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:41:28.419 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:41:28.419 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:41:28.419 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:41:28.419 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:41:28.419 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2ec9b5a4e577d4685cf42739fa657c87 00:41:28.419 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:41:28.419 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.1aB 00:41:28.419 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2ec9b5a4e577d4685cf42739fa657c87 0 00:41:28.419 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2ec9b5a4e577d4685cf42739fa657c87 0 00:41:28.419 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:41:28.419 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:41:28.419 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2ec9b5a4e577d4685cf42739fa657c87 00:41:28.419 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:41:28.419 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:41:28.419 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.1aB 00:41:28.419 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.1aB 00:41:28.419 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.1aB 00:41:28.419 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:41:28.419 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:41:28.419 14:02:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:41:28.419 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:41:28.419 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:41:28.419 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:41:28.419 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:41:28.419 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ce769cd3c050f25e2ab56b2750b6243f06d69c98c68e4b44a42932d8cf4ffe9f 00:41:28.419 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:41:28.419 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.dGV 00:41:28.419 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ce769cd3c050f25e2ab56b2750b6243f06d69c98c68e4b44a42932d8cf4ffe9f 3 00:41:28.419 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ce769cd3c050f25e2ab56b2750b6243f06d69c98c68e4b44a42932d8cf4ffe9f 3 00:41:28.419 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:41:28.419 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:41:28.419 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ce769cd3c050f25e2ab56b2750b6243f06d69c98c68e4b44a42932d8cf4ffe9f 00:41:28.419 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:41:28.419 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.dGV 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.dGV 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.dGV 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b5e1ad0b05d93577d6d1eb6396056895c78afe6f11afeca4 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.xqJ 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b5e1ad0b05d93577d6d1eb6396056895c78afe6f11afeca4 0 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b5e1ad0b05d93577d6d1eb6396056895c78afe6f11afeca4 0 
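The gen_dhchap_key calls traced here (and continuing below) create the DH-HMAC-CHAP secrets for the auth test: random bytes are read from /dev/urandom with xxd, encoded into a key file by an inline python helper (nvmf/common.sh@733), the file is chmod 0600, and its path is stored in keys[]/ckeys[] for later registration via rpc_cmd keyring_file_add_key. The helper itself is not shown in the log; the sketch below assumes the usual DHHC-1 representation (base64 of the key bytes plus a little-endian CRC32, with a hash id of 0 to 3) and is an approximation, not the script's exact code.

  key_hex=$(xxd -p -c0 -l 32 /dev/urandom)     # 32 random bytes, e.g. for a sha512 (digest=3) key
  file=$(mktemp -t spdk.key-sha512.XXX)
  python3 -c 'import base64,binascii,sys; k=bytes.fromhex(sys.argv[1]); h=int(sys.argv[2]); c=binascii.crc32(k).to_bytes(4,"little"); print("DHHC-1:%02x:%s:" % (h, base64.b64encode(k+c).decode()))' "$key_hex" 3 > "$file"
  chmod 0600 "$file"
  echo "$file"                                 # this path is what keyring_file_add_key later consumes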
00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b5e1ad0b05d93577d6d1eb6396056895c78afe6f11afeca4 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.xqJ 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.xqJ 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.xqJ 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f688fc9e3bd2e351120e8a38bb38c4db4f816acdf11b3198 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.VDH 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f688fc9e3bd2e351120e8a38bb38c4db4f816acdf11b3198 2 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f688fc9e3bd2e351120e8a38bb38c4db4f816acdf11b3198 2 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f688fc9e3bd2e351120e8a38bb38c4db4f816acdf11b3198 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.VDH 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.VDH 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.VDH 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:41:28.680 14:02:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8509cd4b59f510947a578c7bf7d16651 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.hyW 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8509cd4b59f510947a578c7bf7d16651 1 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8509cd4b59f510947a578c7bf7d16651 1 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8509cd4b59f510947a578c7bf7d16651 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:41:28.680 14:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.hyW 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.hyW 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.hyW 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=acd4dfa5b6fba5b576e96a8b13dfa27e 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.jzF 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key acd4dfa5b6fba5b576e96a8b13dfa27e 1 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 acd4dfa5b6fba5b576e96a8b13dfa27e 1 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=acd4dfa5b6fba5b576e96a8b13dfa27e 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.jzF 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.jzF 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.jzF 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3a96dbf37c8a5c27481f26292fc7c968a4b7c247e3f3ef2b 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.riG 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3a96dbf37c8a5c27481f26292fc7c968a4b7c247e3f3ef2b 2 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3a96dbf37c8a5c27481f26292fc7c968a4b7c247e3f3ef2b 2 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3a96dbf37c8a5c27481f26292fc7c968a4b7c247e3f3ef2b 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.riG 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.riG 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.riG 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:41:28.941 14:02:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7d7d1d3fca031d43580f5de022fcb698 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.M5f 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7d7d1d3fca031d43580f5de022fcb698 0 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7d7d1d3fca031d43580f5de022fcb698 0 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7d7d1d3fca031d43580f5de022fcb698 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:41:28.941 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:41:28.942 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.M5f 00:41:28.942 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.M5f 00:41:28.942 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.M5f 00:41:28.942 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:41:28.942 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:41:28.942 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:41:28.942 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:41:28.942 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:41:28.942 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:41:28.942 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:41:28.942 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ec197d91785bcf6d5b264a198d26e4632879a49beebd4d85be2307c87976efd7 00:41:28.942 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:41:28.942 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.E6K 00:41:28.942 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ec197d91785bcf6d5b264a198d26e4632879a49beebd4d85be2307c87976efd7 3 00:41:28.942 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ec197d91785bcf6d5b264a198d26e4632879a49beebd4d85be2307c87976efd7 3 00:41:28.942 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:41:28.942 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:41:28.942 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ec197d91785bcf6d5b264a198d26e4632879a49beebd4d85be2307c87976efd7 00:41:28.942 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:41:28.942 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:41:29.202 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.E6K 00:41:29.202 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.E6K 00:41:29.202 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.E6K 00:41:29.202 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:41:29.202 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78366 00:41:29.202 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78366 ']' 00:41:29.202 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:29.202 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:29.202 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:29.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:29.202 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:29.202 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.1aB 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.dGV ]] 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dGV 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.xqJ 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.VDH ]] 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.VDH 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.hyW 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.jzF ]] 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.jzF 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.riG 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.M5f ]] 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.M5f 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.E6K 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:29.464 14:02:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:41:29.464 14:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:41:30.078 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:41:30.078 Waiting for block devices as requested 00:41:30.078 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:41:30.078 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:41:31.017 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:41:31.017 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:41:31.017 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:41:31.017 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:41:31.017 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:41:31.017 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:41:31.017 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:41:31.017 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:41:31.017 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:41:31.017 No valid GPT data, bailing 00:41:31.017 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:41:31.017 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:41:31.017 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:41:31.017 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:41:31.017 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:41:31.017 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:41:31.017 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:41:31.017 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:41:31.017 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:41:31.017 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:41:31.017 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:41:31.017 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:41:31.017 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:41:31.017 No valid GPT data, bailing 00:41:31.017 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:41:31.017 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:41:31.017 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:41:31.017 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:41:31.017 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:41:31.017 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:41:31.017 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:41:31.017 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:41:31.017 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:41:31.017 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:41:31.017 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:41:31.017 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:41:31.017 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:41:31.277 No valid GPT data, bailing 00:41:31.277 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:41:31.277 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:41:31.278 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:41:31.278 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:41:31.278 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:41:31.278 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:41:31.278 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:41:31.278 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:41:31.278 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:41:31.278 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:41:31.278 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:41:31.278 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:41:31.278 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:41:31.278 No valid GPT data, bailing 00:41:31.278 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:41:31.278 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:41:31.278 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:41:31.278 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:41:31.278 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:41:31.278 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:41:31.278 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:41:31.278 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:41:31.278 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:41:31.278 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:41:31.278 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:41:31.278 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:41:31.278 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:41:31.278 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:41:31.278 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:41:31.278 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:41:31.278 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:41:31.278 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid=105ec898-1662-46bd-85be-b241e399edb9 -a 10.0.0.1 -t tcp -s 4420 00:41:31.278 00:41:31.278 Discovery Log Number of Records 2, Generation counter 2 00:41:31.278 =====Discovery Log Entry 0====== 00:41:31.278 trtype: tcp 00:41:31.278 adrfam: ipv4 00:41:31.278 subtype: current discovery subsystem 00:41:31.278 treq: not specified, sq flow control disable supported 00:41:31.278 portid: 1 00:41:31.278 trsvcid: 4420 00:41:31.278 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:41:31.278 traddr: 10.0.0.1 00:41:31.278 eflags: none 00:41:31.278 sectype: none 00:41:31.278 =====Discovery Log Entry 1====== 00:41:31.278 trtype: tcp 00:41:31.278 adrfam: ipv4 00:41:31.278 subtype: nvme subsystem 00:41:31.278 treq: not specified, sq flow control disable supported 00:41:31.278 portid: 1 00:41:31.278 trsvcid: 4420 00:41:31.278 subnqn: nqn.2024-02.io.spdk:cnode0 00:41:31.278 traddr: 10.0.0.1 00:41:31.278 eflags: none 00:41:31.278 sectype: none 00:41:31.278 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:41:31.278 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:41:31.278 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:41:31.278 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:41:31.278 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:31.278 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:31.278 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:31.278 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:31.278 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjVlMWFkMGIwNWQ5MzU3N2Q2ZDFlYjYzOTYwNTY4OTVjNzhhZmU2ZjExYWZlY2E08OFetQ==: 00:41:31.278 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: 00:41:31.278 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:31.278 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:31.538 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjVlMWFkMGIwNWQ5MzU3N2Q2ZDFlYjYzOTYwNTY4OTVjNzhhZmU2ZjExYWZlY2E08OFetQ==: 00:41:31.538 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: ]] 00:41:31.538 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: 00:41:31.538 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:41:31.538 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:41:31.538 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:41:31.538 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:41:31.538 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:41:31.538 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:31.538 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:41:31.538 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:41:31.538 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:31.538 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:31.538 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:41:31.538 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:31.539 nvme0n1 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVjOWI1YTRlNTc3ZDQ2ODVjZjQyNzM5ZmE2NTdjODeCMRnM: 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2U3NjljZDNjMDUwZjI1ZTJhYjU2YjI3NTBiNjI0M2YwNmQ2OWM5OGM2OGU0YjQ0YTQyOTMyZDhjZjRmZmU5Zrgj/jo=: 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVjOWI1YTRlNTc3ZDQ2ODVjZjQyNzM5ZmE2NTdjODeCMRnM: 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2U3NjljZDNjMDUwZjI1ZTJhYjU2YjI3NTBiNjI0M2YwNmQ2OWM5OGM2OGU0YjQ0YTQyOTMyZDhjZjRmZmU5Zrgj/jo=: ]] 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:Y2U3NjljZDNjMDUwZjI1ZTJhYjU2YjI3NTBiNjI0M2YwNmQ2OWM5OGM2OGU0YjQ0YTQyOTMyZDhjZjRmZmU5Zrgj/jo=: 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.539 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:31.799 nvme0n1 00:41:31.799 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:31.799 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:31.799 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.799 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:31.799 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:31.799 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:31.799 
14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:31.799 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:31.799 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.799 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:31.799 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:31.799 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:31.799 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:41:31.799 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:31.799 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:31.799 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:31.799 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:31.799 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjVlMWFkMGIwNWQ5MzU3N2Q2ZDFlYjYzOTYwNTY4OTVjNzhhZmU2ZjExYWZlY2E08OFetQ==: 00:41:31.799 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: 00:41:31.799 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:31.799 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:31.799 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjVlMWFkMGIwNWQ5MzU3N2Q2ZDFlYjYzOTYwNTY4OTVjNzhhZmU2ZjExYWZlY2E08OFetQ==: 00:41:31.799 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: ]] 00:41:31.799 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: 00:41:31.799 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:41:31.799 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:31.799 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:31.799 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:31.799 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:31.800 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:31.800 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:41:31.800 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.800 14:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:31.800 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:31.800 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:31.800 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:31.800 14:02:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:31.800 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:31.800 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:31.800 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:31.800 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:31.800 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:31.800 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:31.800 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:31.800 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:31.800 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:31.800 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.800 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:31.800 nvme0n1 00:41:31.800 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:31.800 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:31.800 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:31.800 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.800 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:31.800 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.059 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:32.059 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:32.059 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.059 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:32.059 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.059 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:32.059 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:41:32.059 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:32.059 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:32.059 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:32.059 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:32.059 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODUwOWNkNGI1OWY1MTA5NDdhNTc4YzdiZjdkMTY2NTEXtW+3: 00:41:32.059 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: 00:41:32.059 14:02:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:32.059 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:32.059 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODUwOWNkNGI1OWY1MTA5NDdhNTc4YzdiZjdkMTY2NTEXtW+3: 00:41:32.059 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: ]] 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:32.060 nvme0n1 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E5NmRiZjM3YzhhNWMyNzQ4MWYyNjI5MmZjN2M5NjhhNGI3YzI0N2UzZjNlZjJighm8Fw==: 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2Q3ZDFkM2ZjYTAzMWQ0MzU4MGY1ZGUwMjJmY2I2OTgUXj2A: 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E5NmRiZjM3YzhhNWMyNzQ4MWYyNjI5MmZjN2M5NjhhNGI3YzI0N2UzZjNlZjJighm8Fw==: 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2Q3ZDFkM2ZjYTAzMWQ0MzU4MGY1ZGUwMjJmY2I2OTgUXj2A: ]] 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2Q3ZDFkM2ZjYTAzMWQ0MzU4MGY1ZGUwMjJmY2I2OTgUXj2A: 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.060 14:02:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.060 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:32.319 nvme0n1 00:41:32.319 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.319 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:32.319 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.319 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:32.319 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:32.319 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.319 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:32.319 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:32.319 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.319 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:32.319 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.319 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:32.319 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:41:32.319 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:32.319 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:32.319 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:32.319 
14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:32.319 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWMxOTdkOTE3ODViY2Y2ZDViMjY0YTE5OGQyNmU0NjMyODc5YTQ5YmVlYmQ0ZDg1YmUyMzA3Yzg3OTc2ZWZkN6e8k/E=: 00:41:32.319 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:32.319 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:32.319 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:32.319 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWMxOTdkOTE3ODViY2Y2ZDViMjY0YTE5OGQyNmU0NjMyODc5YTQ5YmVlYmQ0ZDg1YmUyMzA3Yzg3OTc2ZWZkN6e8k/E=: 00:41:32.319 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:32.319 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:41:32.319 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:32.319 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:32.319 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:32.319 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:32.319 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:32.319 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:41:32.319 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.319 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:32.319 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.319 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:32.319 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:32.319 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:32.319 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:32.320 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:32.320 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:32.320 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:32.320 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:32.320 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:32.320 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:32.320 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:32.320 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:32.320 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.320 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
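For reference, a minimal sketch of what each authenticated-connect iteration traced above boils down to, using keyid 1 as the example: the key file paths, the 10.0.0.1:4420 kernel nvmet listener, and the subsystem/host NQNs are taken directly from the log records; the sequencing is condensed for illustration and omits the per-digest/per-dhgroup loop and the nvmet-side key setup.
    # 1. register the per-keyid secret and controller secret with the SPDK host keyring
    rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.xqJ
    rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.VDH
    # 2. restrict the host to the digest and FFDHE group under test
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    # 3. attach to the kernel target, authenticating in both directions
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # 4. verify the controller came up, then tear it down before the next combination
    rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
    rpc_cmd bdev_nvme_detach_controller nvme0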
00:41:32.320 nvme0n1 00:41:32.320 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.320 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:32.320 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:32.320 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.320 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:32.580 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.580 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:32.580 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:32.580 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.580 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:32.580 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.580 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:32.580 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:32.580 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:41:32.580 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:32.580 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:32.580 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:32.580 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:32.580 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVjOWI1YTRlNTc3ZDQ2ODVjZjQyNzM5ZmE2NTdjODeCMRnM: 00:41:32.580 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2U3NjljZDNjMDUwZjI1ZTJhYjU2YjI3NTBiNjI0M2YwNmQ2OWM5OGM2OGU0YjQ0YTQyOTMyZDhjZjRmZmU5Zrgj/jo=: 00:41:32.580 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:32.580 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:32.840 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVjOWI1YTRlNTc3ZDQ2ODVjZjQyNzM5ZmE2NTdjODeCMRnM: 00:41:32.840 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2U3NjljZDNjMDUwZjI1ZTJhYjU2YjI3NTBiNjI0M2YwNmQ2OWM5OGM2OGU0YjQ0YTQyOTMyZDhjZjRmZmU5Zrgj/jo=: ]] 00:41:32.840 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2U3NjljZDNjMDUwZjI1ZTJhYjU2YjI3NTBiNjI0M2YwNmQ2OWM5OGM2OGU0YjQ0YTQyOTMyZDhjZjRmZmU5Zrgj/jo=: 00:41:32.840 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:41:32.840 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:32.840 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:32.840 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:32.840 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:32.840 14:02:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:32.840 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:41:32.840 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.840 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:32.840 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.840 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:32.840 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:32.840 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:32.840 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:32.840 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:32.840 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:32.840 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:32.840 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:32.840 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:32.840 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:32.840 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:32.840 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:32.840 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.840 14:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:32.840 nvme0n1 00:41:32.840 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.840 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:32.840 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:32.840 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.840 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:32.840 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.840 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:32.840 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:32.840 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.840 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:33.103 14:02:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjVlMWFkMGIwNWQ5MzU3N2Q2ZDFlYjYzOTYwNTY4OTVjNzhhZmU2ZjExYWZlY2E08OFetQ==: 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjVlMWFkMGIwNWQ5MzU3N2Q2ZDFlYjYzOTYwNTY4OTVjNzhhZmU2ZjExYWZlY2E08OFetQ==: 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: ]] 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:33.103 14:02:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:33.103 nvme0n1 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODUwOWNkNGI1OWY1MTA5NDdhNTc4YzdiZjdkMTY2NTEXtW+3: 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODUwOWNkNGI1OWY1MTA5NDdhNTc4YzdiZjdkMTY2NTEXtW+3: 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: ]] 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.103 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:33.364 nvme0n1 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E5NmRiZjM3YzhhNWMyNzQ4MWYyNjI5MmZjN2M5NjhhNGI3YzI0N2UzZjNlZjJighm8Fw==: 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2Q3ZDFkM2ZjYTAzMWQ0MzU4MGY1ZGUwMjJmY2I2OTgUXj2A: 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E5NmRiZjM3YzhhNWMyNzQ4MWYyNjI5MmZjN2M5NjhhNGI3YzI0N2UzZjNlZjJighm8Fw==: 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2Q3ZDFkM2ZjYTAzMWQ0MzU4MGY1ZGUwMjJmY2I2OTgUXj2A: ]] 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2Q3ZDFkM2ZjYTAzMWQ0MzU4MGY1ZGUwMjJmY2I2OTgUXj2A: 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.364 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:33.624 nvme0n1 00:41:33.624 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.624 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:33.624 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:33.624 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.624 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:33.624 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.624 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:33.624 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:33.624 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.624 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:33.624 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.624 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:33.624 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:41:33.624 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:33.624 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:33.624 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:33.624 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:33.624 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWMxOTdkOTE3ODViY2Y2ZDViMjY0YTE5OGQyNmU0NjMyODc5YTQ5YmVlYmQ0ZDg1YmUyMzA3Yzg3OTc2ZWZkN6e8k/E=: 00:41:33.624 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:33.624 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:33.624 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:33.624 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZWMxOTdkOTE3ODViY2Y2ZDViMjY0YTE5OGQyNmU0NjMyODc5YTQ5YmVlYmQ0ZDg1YmUyMzA3Yzg3OTc2ZWZkN6e8k/E=: 00:41:33.624 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:33.624 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:41:33.624 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:33.624 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:33.624 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:33.624 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:33.624 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:33.624 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:41:33.624 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.624 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:33.624 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.624 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:33.624 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:33.624 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:33.624 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:33.624 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:33.624 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:33.624 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:33.624 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:33.625 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:33.625 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:33.625 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:33.625 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:33.625 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.625 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:33.625 nvme0n1 00:41:33.625 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.625 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:33.625 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:33.625 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.625 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:33.885 14:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.885 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:33.885 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:33.885 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.885 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:33.885 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.885 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:33.885 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:33.885 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:41:33.885 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:33.885 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:33.885 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:33.885 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:33.885 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVjOWI1YTRlNTc3ZDQ2ODVjZjQyNzM5ZmE2NTdjODeCMRnM: 00:41:33.885 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2U3NjljZDNjMDUwZjI1ZTJhYjU2YjI3NTBiNjI0M2YwNmQ2OWM5OGM2OGU0YjQ0YTQyOTMyZDhjZjRmZmU5Zrgj/jo=: 00:41:33.885 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:33.885 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:34.456 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVjOWI1YTRlNTc3ZDQ2ODVjZjQyNzM5ZmE2NTdjODeCMRnM: 00:41:34.456 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2U3NjljZDNjMDUwZjI1ZTJhYjU2YjI3NTBiNjI0M2YwNmQ2OWM5OGM2OGU0YjQ0YTQyOTMyZDhjZjRmZmU5Zrgj/jo=: ]] 00:41:34.456 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2U3NjljZDNjMDUwZjI1ZTJhYjU2YjI3NTBiNjI0M2YwNmQ2OWM5OGM2OGU0YjQ0YTQyOTMyZDhjZjRmZmU5Zrgj/jo=: 00:41:34.456 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:41:34.456 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:34.456 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:34.456 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:34.456 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:34.456 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:34.456 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:41:34.456 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:34.456 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:34.456 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:34.456 14:02:31 
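
nvmet_auth_set_key (host/auth.sh@42-@51 in the trace) pushes the DHHC-1 secrets to the kernel nvmet target for the in-band DH-HMAC-CHAP exchange: it echoes 'hmac(sha256)', the DH group, the host key and, when one is defined, the controller key. The destinations of those echoes are not visible in this excerpt; the following is only a minimal sketch assuming the usual nvmet configfs attributes under /sys/kernel/config/nvmet/hosts/<hostnqn>/ (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key), which is an assumption rather than something taken from this log:

    # hypothetical helper mirroring the echoes seen in the trace; configfs paths are assumed
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac(${digest})" > "${host_dir}/dhchap_hash"
        echo "${dhgroup}"      > "${host_dir}/dhchap_dhgroup"
        echo "${keys[keyid]}"  > "${host_dir}/dhchap_key"
        [[ -z "${ckeys[keyid]}" ]] || echo "${ckeys[keyid]}" > "${host_dir}/dhchap_ctrl_key"
    }
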
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:34.456 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:34.456 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:34.456 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:34.456 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:34.456 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:34.456 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:34.456 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:34.456 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:34.456 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:34.456 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:34.456 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:34.456 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:34.456 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:34.456 nvme0n1 00:41:34.456 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:34.456 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:34.456 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:34.456 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:34.456 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:34.456 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:34.717 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:34.717 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:34.717 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:34.717 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:34.717 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:34.717 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:34.717 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:41:34.717 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:34.717 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:34.717 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:34.717 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:34.717 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YjVlMWFkMGIwNWQ5MzU3N2Q2ZDFlYjYzOTYwNTY4OTVjNzhhZmU2ZjExYWZlY2E08OFetQ==: 00:41:34.717 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: 00:41:34.717 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:34.717 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:34.717 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjVlMWFkMGIwNWQ5MzU3N2Q2ZDFlYjYzOTYwNTY4OTVjNzhhZmU2ZjExYWZlY2E08OFetQ==: 00:41:34.717 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: ]] 00:41:34.717 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: 00:41:34.717 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:41:34.717 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:34.717 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:34.717 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:34.717 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:34.717 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:34.717 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:41:34.717 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:34.717 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:34.717 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:34.717 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:34.717 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:34.717 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:34.717 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:34.717 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:34.717 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:34.717 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:34.717 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:34.717 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:34.717 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:34.717 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:34.717 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:34.717 14:02:31 
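
connect_authenticate (host/auth.sh@104, expanded at @55-@61) is the initiator half: it restricts the allowed digests and DH groups with bdev_nvme_set_options and then attaches with the per-key DH-HMAC-CHAP credentials. rpc_cmd is the test suite's wrapper around scripts/rpc.py, and key1/ckey1 are keyring entries registered earlier in the test (their creation is not shown in this excerpt). Stripped of the wrapper, the two RPCs traced just above amount to:

    # equivalent of the rpc_cmd calls traced above; the rpc.py path is assumed relative to the SPDK repo
    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256 \
        --dhchap-dhgroups ffdhe4096
    ./scripts/rpc.py bdev_nvme_attach_controller \
        -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
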
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:34.717 14:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:34.717 nvme0n1 00:41:34.717 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:34.717 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:34.717 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:34.717 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:34.717 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:34.977 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:34.977 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:34.977 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:34.977 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:34.977 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:34.977 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:34.977 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:34.977 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:41:34.977 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:34.977 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:34.977 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:34.977 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:34.977 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODUwOWNkNGI1OWY1MTA5NDdhNTc4YzdiZjdkMTY2NTEXtW+3: 00:41:34.977 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: 00:41:34.977 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:34.977 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:34.977 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODUwOWNkNGI1OWY1MTA5NDdhNTc4YzdiZjdkMTY2NTEXtW+3: 00:41:34.977 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: ]] 00:41:34.977 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: 00:41:34.977 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:41:34.977 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:34.977 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:34.977 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:34.977 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:34.977 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:34.977 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:41:34.977 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:34.977 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:34.977 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:34.977 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:34.977 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:34.977 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:34.977 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:34.977 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:34.977 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:34.977 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:34.978 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:34.978 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:34.978 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:34.978 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:34.978 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:34.978 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:34.978 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:34.978 nvme0n1 00:41:34.978 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:34.978 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:34.978 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:34.978 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:34.978 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:35.238 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.238 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:35.238 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:35.238 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.238 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:35.238 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.238 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:35.238 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:41:35.238 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:35.238 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:35.238 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:35.238 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:35.238 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E5NmRiZjM3YzhhNWMyNzQ4MWYyNjI5MmZjN2M5NjhhNGI3YzI0N2UzZjNlZjJighm8Fw==: 00:41:35.238 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2Q3ZDFkM2ZjYTAzMWQ0MzU4MGY1ZGUwMjJmY2I2OTgUXj2A: 00:41:35.238 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:35.238 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:35.238 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E5NmRiZjM3YzhhNWMyNzQ4MWYyNjI5MmZjN2M5NjhhNGI3YzI0N2UzZjNlZjJighm8Fw==: 00:41:35.238 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2Q3ZDFkM2ZjYTAzMWQ0MzU4MGY1ZGUwMjJmY2I2OTgUXj2A: ]] 00:41:35.238 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2Q3ZDFkM2ZjYTAzMWQ0MzU4MGY1ZGUwMjJmY2I2OTgUXj2A: 00:41:35.238 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:41:35.238 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:35.238 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:35.238 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:35.238 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:35.238 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:35.238 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:41:35.238 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.238 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:35.238 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.238 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:35.238 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:35.238 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:35.238 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:35.238 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:35.238 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:35.238 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:35.238 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:35.238 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:35.238 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:35.238 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:35.238 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:35.238 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.238 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:35.238 nvme0n1 00:41:35.238 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.499 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:35.499 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:35.499 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.499 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:35.499 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.499 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:35.499 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:35.499 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.499 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:35.499 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.499 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:35.499 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:41:35.499 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:35.499 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:35.499 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:35.499 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:35.499 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWMxOTdkOTE3ODViY2Y2ZDViMjY0YTE5OGQyNmU0NjMyODc5YTQ5YmVlYmQ0ZDg1YmUyMzA3Yzg3OTc2ZWZkN6e8k/E=: 00:41:35.499 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:35.499 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:35.499 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:35.499 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWMxOTdkOTE3ODViY2Y2ZDViMjY0YTE5OGQyNmU0NjMyODc5YTQ5YmVlYmQ0ZDg1YmUyMzA3Yzg3OTc2ZWZkN6e8k/E=: 00:41:35.499 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:35.499 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:41:35.499 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:35.499 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:35.499 14:02:32 
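
The keyid=4 pass just above has no controller key: ckey expands to the empty string ("[[ -z '' ]]") and the subsequent attach is issued without --dhchap-ctrlr-key. That is what the ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) line at host/auth.sh@58 achieves: the ':+' parameter expansion yields the option pair only when a controller key exists for that index. A small standalone illustration of the same pattern, with placeholder key values:

    #!/usr/bin/env bash
    # ':+' expands to the alternative text only if the variable is set and non-empty
    declare -A ckeys=( [1]="DHHC-1:..." [4]="" )
    for keyid in 1 4; do
        extra=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${extra[*]:-<no controller key flag>}"
    done
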
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:35.499 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:35.499 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:35.499 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:41:35.499 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.499 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:35.499 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.499 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:35.499 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:35.499 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:35.499 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:35.499 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:35.499 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:35.499 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:35.499 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:35.499 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:35.499 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:35.499 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:35.499 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:35.499 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.499 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:35.759 nvme0n1 00:41:35.759 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.759 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:35.759 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:35.759 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.759 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:35.759 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.759 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:35.759 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:35.759 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.759 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:35.759 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.759 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:35.759 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:35.759 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:41:35.759 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:35.759 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:35.759 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:35.759 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:35.759 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVjOWI1YTRlNTc3ZDQ2ODVjZjQyNzM5ZmE2NTdjODeCMRnM: 00:41:35.759 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2U3NjljZDNjMDUwZjI1ZTJhYjU2YjI3NTBiNjI0M2YwNmQ2OWM5OGM2OGU0YjQ0YTQyOTMyZDhjZjRmZmU5Zrgj/jo=: 00:41:35.759 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:35.759 14:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVjOWI1YTRlNTc3ZDQ2ODVjZjQyNzM5ZmE2NTdjODeCMRnM: 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2U3NjljZDNjMDUwZjI1ZTJhYjU2YjI3NTBiNjI0M2YwNmQ2OWM5OGM2OGU0YjQ0YTQyOTMyZDhjZjRmZmU5Zrgj/jo=: ]] 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2U3NjljZDNjMDUwZjI1ZTJhYjU2YjI3NTBiNjI0M2YwNmQ2OWM5OGM2OGU0YjQ0YTQyOTMyZDhjZjRmZmU5Zrgj/jo=: 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:37.671 nvme0n1 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjVlMWFkMGIwNWQ5MzU3N2Q2ZDFlYjYzOTYwNTY4OTVjNzhhZmU2ZjExYWZlY2E08OFetQ==: 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YjVlMWFkMGIwNWQ5MzU3N2Q2ZDFlYjYzOTYwNTY4OTVjNzhhZmU2ZjExYWZlY2E08OFetQ==: 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: ]] 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:37.671 14:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:38.241 nvme0n1 00:41:38.241 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.241 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:38.241 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:38.241 14:02:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.241 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:38.241 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.241 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:38.241 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:38.241 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.241 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:38.241 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.241 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:38.241 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:41:38.241 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:38.241 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:38.241 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:38.241 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:38.241 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODUwOWNkNGI1OWY1MTA5NDdhNTc4YzdiZjdkMTY2NTEXtW+3: 00:41:38.241 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: 00:41:38.241 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:38.241 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:38.241 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODUwOWNkNGI1OWY1MTA5NDdhNTc4YzdiZjdkMTY2NTEXtW+3: 00:41:38.241 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: ]] 00:41:38.241 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: 00:41:38.241 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:41:38.241 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:38.241 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:38.241 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:38.241 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:38.241 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:38.241 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:41:38.241 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.241 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:38.241 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.241 14:02:35 
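
Between iterations the test confirms that the authenticated attach actually produced a controller and then tears it down; that is the bdev_nvme_get_controllers / jq / bdev_nvme_detach_controller sequence at host/auth.sh@64-@65 seen repeatedly above. In plain shell, with scripts/rpc.py standing in for the rpc_cmd wrapper:

    # verify the controller came up under the expected name, then detach it
    name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ "$name" == "nvme0" ]]
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0
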
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:38.241 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:38.241 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:38.241 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:38.241 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:38.241 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:38.241 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:38.241 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:38.241 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:38.241 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:38.241 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:38.241 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:38.241 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.241 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:38.501 nvme0n1 00:41:38.501 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.501 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:38.501 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:38.501 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.501 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:38.501 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.501 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:38.501 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:38.501 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.501 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:38.501 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.501 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:38.501 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:41:38.501 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:38.501 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:38.501 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:38.501 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:38.501 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:M2E5NmRiZjM3YzhhNWMyNzQ4MWYyNjI5MmZjN2M5NjhhNGI3YzI0N2UzZjNlZjJighm8Fw==: 00:41:38.501 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2Q3ZDFkM2ZjYTAzMWQ0MzU4MGY1ZGUwMjJmY2I2OTgUXj2A: 00:41:38.501 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:38.501 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:38.501 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E5NmRiZjM3YzhhNWMyNzQ4MWYyNjI5MmZjN2M5NjhhNGI3YzI0N2UzZjNlZjJighm8Fw==: 00:41:38.501 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2Q3ZDFkM2ZjYTAzMWQ0MzU4MGY1ZGUwMjJmY2I2OTgUXj2A: ]] 00:41:38.501 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2Q3ZDFkM2ZjYTAzMWQ0MzU4MGY1ZGUwMjJmY2I2OTgUXj2A: 00:41:38.501 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:41:38.501 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:38.501 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:38.501 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:38.501 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:38.501 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:38.501 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:41:38.501 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.501 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:38.501 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.501 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:38.501 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:38.501 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:38.501 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:38.501 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:38.501 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:38.501 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:38.501 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:38.501 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:38.501 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:38.501 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:38.501 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:38.501 14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.501 
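[editor note] The host-side cycle that this trace repeats for every key can be condensed into a short sketch. This is not the test script itself, only a paraphrase of the rpc_cmd calls visible above (host/auth.sh@57-65); it assumes the surrounding SPDK test environment provides rpc_cmd and the keys/ckeys arrays, and the address, NQNs and key names are copied verbatim from the log.

  # Sketch: one connect_authenticate pass as seen in the trace (digest/dhgroup/keyid vary per iteration).
  connect_authenticate_sketch() {
      local digest=$1 dhgroup=$2 keyid=$3
      # Optional bidirectional (controller) key, built the same way host/auth.sh@58 builds it.
      local -a ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      # Restrict the initiator to the digest/dhgroup pair under test.
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      # Connect with DH-HMAC-CHAP using the key named keyN that the test set up earlier.
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" "${ckey[@]}"
      # Authentication passed if the controller shows up by name...
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      # ...then tear it down before the next key/dhgroup is tried.
      rpc_cmd bdev_nvme_detach_controller nvme0
  }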
14:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:39.165 nvme0n1 00:41:39.165 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.165 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:39.165 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:39.165 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.165 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:39.165 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.165 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:39.165 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:39.165 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.165 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:39.165 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.165 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:39.165 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:41:39.165 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:39.165 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:39.165 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:39.165 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:39.165 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWMxOTdkOTE3ODViY2Y2ZDViMjY0YTE5OGQyNmU0NjMyODc5YTQ5YmVlYmQ0ZDg1YmUyMzA3Yzg3OTc2ZWZkN6e8k/E=: 00:41:39.165 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:39.165 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:39.165 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:39.165 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWMxOTdkOTE3ODViY2Y2ZDViMjY0YTE5OGQyNmU0NjMyODc5YTQ5YmVlYmQ0ZDg1YmUyMzA3Yzg3OTc2ZWZkN6e8k/E=: 00:41:39.165 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:39.165 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:41:39.165 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:39.165 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:39.165 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:39.165 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:39.165 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:39.165 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:41:39.165 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.165 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:39.165 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.165 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:39.165 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:39.165 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:39.165 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:39.165 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:39.165 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:39.165 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:39.165 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:39.165 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:39.165 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:39.165 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:39.165 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:39.165 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.165 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:39.425 nvme0n1 00:41:39.425 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.425 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:39.425 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:39.425 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.425 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:39.425 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.425 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:39.425 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:39.425 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.425 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:39.425 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.425 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:39.425 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:39.425 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:41:39.425 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:39.425 14:02:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:39.425 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:39.425 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:39.425 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVjOWI1YTRlNTc3ZDQ2ODVjZjQyNzM5ZmE2NTdjODeCMRnM: 00:41:39.425 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2U3NjljZDNjMDUwZjI1ZTJhYjU2YjI3NTBiNjI0M2YwNmQ2OWM5OGM2OGU0YjQ0YTQyOTMyZDhjZjRmZmU5Zrgj/jo=: 00:41:39.425 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:39.425 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:39.425 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVjOWI1YTRlNTc3ZDQ2ODVjZjQyNzM5ZmE2NTdjODeCMRnM: 00:41:39.425 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2U3NjljZDNjMDUwZjI1ZTJhYjU2YjI3NTBiNjI0M2YwNmQ2OWM5OGM2OGU0YjQ0YTQyOTMyZDhjZjRmZmU5Zrgj/jo=: ]] 00:41:39.425 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2U3NjljZDNjMDUwZjI1ZTJhYjU2YjI3NTBiNjI0M2YwNmQ2OWM5OGM2OGU0YjQ0YTQyOTMyZDhjZjRmZmU5Zrgj/jo=: 00:41:39.425 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:41:39.425 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:39.426 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:39.426 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:39.426 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:39.426 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:39.426 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:41:39.426 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.426 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:39.426 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.426 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:39.426 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:39.426 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:39.426 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:39.426 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:39.426 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:39.426 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:39.426 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:39.426 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:39.426 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:39.426 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:39.426 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:39.426 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.426 14:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:39.994 nvme0n1 00:41:39.994 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.994 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:39.994 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:39.994 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.994 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:39.994 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.994 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:39.994 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:39.994 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.994 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:39.994 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.994 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:39.994 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:41:39.994 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:39.994 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:39.994 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:39.994 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:39.994 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjVlMWFkMGIwNWQ5MzU3N2Q2ZDFlYjYzOTYwNTY4OTVjNzhhZmU2ZjExYWZlY2E08OFetQ==: 00:41:39.994 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: 00:41:39.994 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:39.994 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:39.994 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjVlMWFkMGIwNWQ5MzU3N2Q2ZDFlYjYzOTYwNTY4OTVjNzhhZmU2ZjExYWZlY2E08OFetQ==: 00:41:39.994 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: ]] 00:41:39.994 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: 00:41:39.994 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:41:39.994 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:39.994 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:39.994 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:39.994 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:39.994 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:39.995 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:41:39.995 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.995 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:40.254 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:40.254 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:40.254 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:40.254 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:40.254 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:40.254 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:40.254 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:40.254 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:40.254 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:40.254 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:40.254 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:40.254 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:40.254 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:40.254 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:40.254 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:40.822 nvme0n1 00:41:40.822 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:40.822 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:40.822 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:40.822 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:40.822 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:40.822 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:40.822 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:40.822 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:40.822 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:41:40.822 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:40.822 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:40.822 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:40.822 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:41:40.822 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:40.822 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:40.822 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:40.822 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:40.822 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODUwOWNkNGI1OWY1MTA5NDdhNTc4YzdiZjdkMTY2NTEXtW+3: 00:41:40.822 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: 00:41:40.822 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:40.822 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:40.822 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODUwOWNkNGI1OWY1MTA5NDdhNTc4YzdiZjdkMTY2NTEXtW+3: 00:41:40.822 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: ]] 00:41:40.822 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: 00:41:40.822 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:41:40.822 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:40.822 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:40.822 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:40.822 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:40.822 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:40.822 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:41:40.822 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:40.822 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:40.822 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:40.822 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:40.822 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:40.822 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:40.822 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:40.822 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:40.822 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:40.822 
14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:40.822 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:40.822 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:40.822 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:40.822 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:40.822 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:40.822 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:40.822 14:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:41.392 nvme0n1 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E5NmRiZjM3YzhhNWMyNzQ4MWYyNjI5MmZjN2M5NjhhNGI3YzI0N2UzZjNlZjJighm8Fw==: 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2Q3ZDFkM2ZjYTAzMWQ0MzU4MGY1ZGUwMjJmY2I2OTgUXj2A: 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E5NmRiZjM3YzhhNWMyNzQ4MWYyNjI5MmZjN2M5NjhhNGI3YzI0N2UzZjNlZjJighm8Fw==: 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:N2Q3ZDFkM2ZjYTAzMWQ0MzU4MGY1ZGUwMjJmY2I2OTgUXj2A: ]] 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2Q3ZDFkM2ZjYTAzMWQ0MzU4MGY1ZGUwMjJmY2I2OTgUXj2A: 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:41.392 14:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:41.961 nvme0n1 00:41:41.961 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:41.961 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:41.961 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:41.961 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:41.961 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:41.961 14:02:39 
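[editor note] On the target side, nvmet_auth_set_key (host/auth.sh@42-51 above) only shows the values being echoed: 'hmac(<digest>)', the dhgroup name, and the DHHC-1 secrets. Where they are written is not visible in this excerpt; a plausible reading, assuming the Linux kernel nvmet soft target and its per-host configfs attributes, is sketched below. The configfs paths and the hostnqn directory are assumptions, not taken from the log.

  # Sketch only: where the echoed values plausibly land (kernel nvmet configfs, paths assumed).
  nvmet_auth_set_key_sketch() {
      local digest=$1 dhgroup=$2 keyid=$3
      local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed host entry
      echo "hmac(${digest})"  > "${host_dir}/dhchap_hash"      # e.g. hmac(sha256), as echoed at @48
      echo "${dhgroup}"       > "${host_dir}/dhchap_dhgroup"   # e.g. ffdhe8192, as echoed at @49
      echo "${keys[keyid]}"   > "${host_dir}/dhchap_key"       # DHHC-1:xx:...: host secret, @50
      # Controller (bidirectional) secret only when the test defines one for this keyid, @51.
      [[ -n ${ckeys[keyid]} ]] && echo "${ckeys[keyid]}" > "${host_dir}/dhchap_ctrl_key"
  }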
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:41.961 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:41.961 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:41.961 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:41.961 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:41.961 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:41.961 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:41.961 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:41:41.961 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:41.961 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:41.961 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:41.961 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:41.961 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWMxOTdkOTE3ODViY2Y2ZDViMjY0YTE5OGQyNmU0NjMyODc5YTQ5YmVlYmQ0ZDg1YmUyMzA3Yzg3OTc2ZWZkN6e8k/E=: 00:41:41.961 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:41.961 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:41.961 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:41.961 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWMxOTdkOTE3ODViY2Y2ZDViMjY0YTE5OGQyNmU0NjMyODc5YTQ5YmVlYmQ0ZDg1YmUyMzA3Yzg3OTc2ZWZkN6e8k/E=: 00:41:41.961 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:41.961 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:41:41.961 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:41.961 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:41:41.961 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:41.961 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:41.961 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:41.961 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:41:41.961 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:41.961 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:41.961 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:42.220 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:42.220 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:42.220 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:42.220 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:42.220 14:02:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:42.220 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:42.220 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:42.220 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:42.220 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:42.220 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:42.220 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:42.220 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:42.220 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:42.220 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:42.788 nvme0n1 00:41:42.788 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:42.788 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:42.788 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:42.788 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:42.788 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:42.788 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:42.788 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:42.789 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:42.789 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:42.789 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:42.789 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:42.789 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:41:42.789 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:42.789 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:42.789 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:41:42.789 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:42.789 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:42.789 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:42.789 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:42.789 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVjOWI1YTRlNTc3ZDQ2ODVjZjQyNzM5ZmE2NTdjODeCMRnM: 00:41:42.789 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:Y2U3NjljZDNjMDUwZjI1ZTJhYjU2YjI3NTBiNjI0M2YwNmQ2OWM5OGM2OGU0YjQ0YTQyOTMyZDhjZjRmZmU5Zrgj/jo=: 00:41:42.789 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:42.789 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:42.789 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVjOWI1YTRlNTc3ZDQ2ODVjZjQyNzM5ZmE2NTdjODeCMRnM: 00:41:42.789 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2U3NjljZDNjMDUwZjI1ZTJhYjU2YjI3NTBiNjI0M2YwNmQ2OWM5OGM2OGU0YjQ0YTQyOTMyZDhjZjRmZmU5Zrgj/jo=: ]] 00:41:42.789 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2U3NjljZDNjMDUwZjI1ZTJhYjU2YjI3NTBiNjI0M2YwNmQ2OWM5OGM2OGU0YjQ0YTQyOTMyZDhjZjRmZmU5Zrgj/jo=: 00:41:42.789 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:41:42.789 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:42.789 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:42.789 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:42.789 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:42.789 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:42.789 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:41:42.789 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:42.789 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:42.789 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:42.789 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:42.789 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:42.789 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:42.789 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:42.789 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:42.789 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:42.789 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:42.789 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:42.789 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:42.789 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:42.789 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:42.789 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:42.789 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:42.789 14:02:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:41:42.789 nvme0n1 00:41:42.789 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:42.789 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:42.789 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:42.789 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:42.789 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:42.789 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:42.789 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:42.789 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:42.789 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:42.789 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:42.789 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:42.789 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:42.789 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:41:42.789 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:42.789 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:42.789 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:42.789 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:42.789 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjVlMWFkMGIwNWQ5MzU3N2Q2ZDFlYjYzOTYwNTY4OTVjNzhhZmU2ZjExYWZlY2E08OFetQ==: 00:41:42.789 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: 00:41:42.789 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:42.789 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:42.789 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjVlMWFkMGIwNWQ5MzU3N2Q2ZDFlYjYzOTYwNTY4OTVjNzhhZmU2ZjExYWZlY2E08OFetQ==: 00:41:42.789 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: ]] 00:41:42.789 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: 00:41:42.789 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:41:42.789 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:42.789 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:42.789 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:42.789 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:42.789 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:41:42.789 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:41:42.790 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:42.790 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:43.050 nvme0n1 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:41:43.050 
14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODUwOWNkNGI1OWY1MTA5NDdhNTc4YzdiZjdkMTY2NTEXtW+3: 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODUwOWNkNGI1OWY1MTA5NDdhNTc4YzdiZjdkMTY2NTEXtW+3: 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: ]] 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:43.050 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:43.309 nvme0n1 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E5NmRiZjM3YzhhNWMyNzQ4MWYyNjI5MmZjN2M5NjhhNGI3YzI0N2UzZjNlZjJighm8Fw==: 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2Q3ZDFkM2ZjYTAzMWQ0MzU4MGY1ZGUwMjJmY2I2OTgUXj2A: 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E5NmRiZjM3YzhhNWMyNzQ4MWYyNjI5MmZjN2M5NjhhNGI3YzI0N2UzZjNlZjJighm8Fw==: 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2Q3ZDFkM2ZjYTAzMWQ0MzU4MGY1ZGUwMjJmY2I2OTgUXj2A: ]] 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2Q3ZDFkM2ZjYTAzMWQ0MzU4MGY1ZGUwMjJmY2I2OTgUXj2A: 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:43.309 
14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:43.309 nvme0n1 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:43.309 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:43.310 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:43.310 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:43.310 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:43.310 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:43.310 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:43.310 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:43.310 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWMxOTdkOTE3ODViY2Y2ZDViMjY0YTE5OGQyNmU0NjMyODc5YTQ5YmVlYmQ0ZDg1YmUyMzA3Yzg3OTc2ZWZkN6e8k/E=: 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWMxOTdkOTE3ODViY2Y2ZDViMjY0YTE5OGQyNmU0NjMyODc5YTQ5YmVlYmQ0ZDg1YmUyMzA3Yzg3OTc2ZWZkN6e8k/E=: 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:43.569 nvme0n1 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:43.569 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:43.570 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:43.570 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVjOWI1YTRlNTc3ZDQ2ODVjZjQyNzM5ZmE2NTdjODeCMRnM: 00:41:43.570 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2U3NjljZDNjMDUwZjI1ZTJhYjU2YjI3NTBiNjI0M2YwNmQ2OWM5OGM2OGU0YjQ0YTQyOTMyZDhjZjRmZmU5Zrgj/jo=: 00:41:43.570 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:43.570 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:43.570 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVjOWI1YTRlNTc3ZDQ2ODVjZjQyNzM5ZmE2NTdjODeCMRnM: 00:41:43.570 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2U3NjljZDNjMDUwZjI1ZTJhYjU2YjI3NTBiNjI0M2YwNmQ2OWM5OGM2OGU0YjQ0YTQyOTMyZDhjZjRmZmU5Zrgj/jo=: ]] 00:41:43.570 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:Y2U3NjljZDNjMDUwZjI1ZTJhYjU2YjI3NTBiNjI0M2YwNmQ2OWM5OGM2OGU0YjQ0YTQyOTMyZDhjZjRmZmU5Zrgj/jo=: 00:41:43.570 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:41:43.570 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:43.570 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:43.570 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:43.570 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:43.570 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:43.570 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:41:43.570 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:43.570 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:43.570 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:43.570 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:43.570 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:43.570 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:43.570 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:43.570 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:43.570 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:43.570 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:43.570 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:43.570 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:43.570 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:43.570 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:43.570 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:43.570 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:43.570 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:43.830 nvme0n1 00:41:43.830 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:43.830 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:43.830 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:43.830 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:43.830 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:43.830 14:02:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:43.830 
14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:43.830 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:43.830 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:43.830 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:43.830 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:43.830 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:43.830 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:41:43.830 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:43.830 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:43.830 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:43.830 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:43.830 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjVlMWFkMGIwNWQ5MzU3N2Q2ZDFlYjYzOTYwNTY4OTVjNzhhZmU2ZjExYWZlY2E08OFetQ==: 00:41:43.830 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: 00:41:43.830 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:43.830 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:43.830 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjVlMWFkMGIwNWQ5MzU3N2Q2ZDFlYjYzOTYwNTY4OTVjNzhhZmU2ZjExYWZlY2E08OFetQ==: 00:41:43.830 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: ]] 00:41:43.830 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: 00:41:43.830 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:41:43.830 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:43.830 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:43.830 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:43.830 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:43.830 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:43.830 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:41:43.830 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:43.830 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:43.830 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:43.830 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:43.830 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:43.830 14:02:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:43.830 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:43.830 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:43.831 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:43.831 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:43.831 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:43.831 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:43.831 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:43.831 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:43.831 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:43.831 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:43.831 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:44.090 nvme0n1 00:41:44.090 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.090 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:44.090 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:44.090 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.090 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:44.090 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.090 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:44.090 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:44.090 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.090 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:44.090 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.090 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:44.090 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:41:44.090 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:44.090 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:44.090 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:44.090 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:44.090 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODUwOWNkNGI1OWY1MTA5NDdhNTc4YzdiZjdkMTY2NTEXtW+3: 00:41:44.090 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: 00:41:44.090 14:02:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:44.090 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:44.090 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODUwOWNkNGI1OWY1MTA5NDdhNTc4YzdiZjdkMTY2NTEXtW+3: 00:41:44.091 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: ]] 00:41:44.091 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: 00:41:44.091 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:41:44.091 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:44.091 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:44.091 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:44.091 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:44.091 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:44.091 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:41:44.091 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.091 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:44.091 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.091 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:44.091 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:44.091 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:44.091 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:44.091 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:44.091 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:44.091 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:44.091 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:44.091 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:44.091 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:44.091 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:44.091 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:44.091 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.091 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:44.091 nvme0n1 00:41:44.091 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.091 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:44.091 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:44.091 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.091 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:44.091 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.351 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:44.351 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E5NmRiZjM3YzhhNWMyNzQ4MWYyNjI5MmZjN2M5NjhhNGI3YzI0N2UzZjNlZjJighm8Fw==: 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2Q3ZDFkM2ZjYTAzMWQ0MzU4MGY1ZGUwMjJmY2I2OTgUXj2A: 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E5NmRiZjM3YzhhNWMyNzQ4MWYyNjI5MmZjN2M5NjhhNGI3YzI0N2UzZjNlZjJighm8Fw==: 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2Q3ZDFkM2ZjYTAzMWQ0MzU4MGY1ZGUwMjJmY2I2OTgUXj2A: ]] 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2Q3ZDFkM2ZjYTAzMWQ0MzU4MGY1ZGUwMjJmY2I2OTgUXj2A: 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.352 14:02:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:44.352 nvme0n1 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:44.352 
14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWMxOTdkOTE3ODViY2Y2ZDViMjY0YTE5OGQyNmU0NjMyODc5YTQ5YmVlYmQ0ZDg1YmUyMzA3Yzg3OTc2ZWZkN6e8k/E=: 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWMxOTdkOTE3ODViY2Y2ZDViMjY0YTE5OGQyNmU0NjMyODc5YTQ5YmVlYmQ0ZDg1YmUyMzA3Yzg3OTc2ZWZkN6e8k/E=: 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:44.352 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:41:44.613 nvme0n1 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVjOWI1YTRlNTc3ZDQ2ODVjZjQyNzM5ZmE2NTdjODeCMRnM: 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2U3NjljZDNjMDUwZjI1ZTJhYjU2YjI3NTBiNjI0M2YwNmQ2OWM5OGM2OGU0YjQ0YTQyOTMyZDhjZjRmZmU5Zrgj/jo=: 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVjOWI1YTRlNTc3ZDQ2ODVjZjQyNzM5ZmE2NTdjODeCMRnM: 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2U3NjljZDNjMDUwZjI1ZTJhYjU2YjI3NTBiNjI0M2YwNmQ2OWM5OGM2OGU0YjQ0YTQyOTMyZDhjZjRmZmU5Zrgj/jo=: ]] 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2U3NjljZDNjMDUwZjI1ZTJhYjU2YjI3NTBiNjI0M2YwNmQ2OWM5OGM2OGU0YjQ0YTQyOTMyZDhjZjRmZmU5Zrgj/jo=: 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:44.613 14:02:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.613 14:02:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:44.873 nvme0n1 00:41:44.873 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.873 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:44.873 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:44.873 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.873 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:44.873 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.873 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:44.873 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:44.873 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.873 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:44.873 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.873 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:44.873 14:02:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:41:44.873 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:44.873 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:44.873 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:44.873 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:44.873 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjVlMWFkMGIwNWQ5MzU3N2Q2ZDFlYjYzOTYwNTY4OTVjNzhhZmU2ZjExYWZlY2E08OFetQ==: 00:41:44.873 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: 00:41:44.873 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:44.873 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:44.873 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjVlMWFkMGIwNWQ5MzU3N2Q2ZDFlYjYzOTYwNTY4OTVjNzhhZmU2ZjExYWZlY2E08OFetQ==: 00:41:44.873 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: ]] 00:41:44.873 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: 00:41:44.873 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:41:44.873 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:44.873 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:44.873 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:44.873 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:44.873 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:44.873 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:41:44.873 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.873 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:44.873 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.873 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:44.873 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:44.873 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:44.873 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:44.873 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:44.873 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:44.873 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:44.873 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:44.873 14:02:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:44.873 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:44.874 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:44.874 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:44.874 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.874 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:45.133 nvme0n1 00:41:45.133 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.133 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:45.133 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:45.133 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:45.133 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:45.133 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.133 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:45.133 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:45.133 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:45.133 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:45.133 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.133 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:45.134 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:41:45.134 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:45.134 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:45.134 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:45.134 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:45.134 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODUwOWNkNGI1OWY1MTA5NDdhNTc4YzdiZjdkMTY2NTEXtW+3: 00:41:45.134 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: 00:41:45.134 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:45.134 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:45.134 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODUwOWNkNGI1OWY1MTA5NDdhNTc4YzdiZjdkMTY2NTEXtW+3: 00:41:45.134 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: ]] 00:41:45.134 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: 00:41:45.134 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:41:45.134 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:45.134 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:45.134 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:45.134 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:45.134 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:45.134 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:41:45.134 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:45.134 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:45.134 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.134 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:45.134 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:45.134 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:45.134 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:45.134 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:45.134 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:45.134 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:45.134 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:45.134 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:45.134 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:45.134 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:45.134 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:45.134 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:45.134 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:45.393 nvme0n1 00:41:45.393 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.393 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:45.393 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:45.393 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:45.393 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:45.393 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.393 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:45.393 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:41:45.393 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:45.393 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:45.393 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.393 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:45.393 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:41:45.393 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:45.393 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:45.393 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:45.393 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:45.393 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E5NmRiZjM3YzhhNWMyNzQ4MWYyNjI5MmZjN2M5NjhhNGI3YzI0N2UzZjNlZjJighm8Fw==: 00:41:45.393 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2Q3ZDFkM2ZjYTAzMWQ0MzU4MGY1ZGUwMjJmY2I2OTgUXj2A: 00:41:45.393 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:45.393 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:45.393 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E5NmRiZjM3YzhhNWMyNzQ4MWYyNjI5MmZjN2M5NjhhNGI3YzI0N2UzZjNlZjJighm8Fw==: 00:41:45.393 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2Q3ZDFkM2ZjYTAzMWQ0MzU4MGY1ZGUwMjJmY2I2OTgUXj2A: ]] 00:41:45.393 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2Q3ZDFkM2ZjYTAzMWQ0MzU4MGY1ZGUwMjJmY2I2OTgUXj2A: 00:41:45.393 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:41:45.393 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:45.393 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:45.393 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:45.393 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:45.393 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:45.393 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:41:45.393 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:45.393 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:45.393 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.393 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:45.393 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:45.393 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:45.393 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:45.393 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:45.394 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:45.394 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:45.394 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:45.394 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:45.394 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:45.394 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:45.394 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:45.394 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:45.394 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:45.653 nvme0n1 00:41:45.653 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.653 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:45.653 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:45.653 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:45.653 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:45.653 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.653 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:45.653 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:45.653 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:45.653 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:45.653 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.653 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:45.653 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:41:45.653 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:45.653 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:45.653 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:45.653 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:45.653 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWMxOTdkOTE3ODViY2Y2ZDViMjY0YTE5OGQyNmU0NjMyODc5YTQ5YmVlYmQ0ZDg1YmUyMzA3Yzg3OTc2ZWZkN6e8k/E=: 00:41:45.653 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:45.653 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:45.653 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:45.653 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZWMxOTdkOTE3ODViY2Y2ZDViMjY0YTE5OGQyNmU0NjMyODc5YTQ5YmVlYmQ0ZDg1YmUyMzA3Yzg3OTc2ZWZkN6e8k/E=: 00:41:45.653 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:45.653 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:41:45.653 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:45.654 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:45.654 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:45.654 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:45.654 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:45.654 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:41:45.654 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:45.654 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:45.654 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.654 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:45.654 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:45.654 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:45.654 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:45.654 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:45.654 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:45.654 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:45.654 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:45.654 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:45.654 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:45.654 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:45.654 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:45.654 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:45.654 14:02:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:45.913 nvme0n1 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVjOWI1YTRlNTc3ZDQ2ODVjZjQyNzM5ZmE2NTdjODeCMRnM: 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2U3NjljZDNjMDUwZjI1ZTJhYjU2YjI3NTBiNjI0M2YwNmQ2OWM5OGM2OGU0YjQ0YTQyOTMyZDhjZjRmZmU5Zrgj/jo=: 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVjOWI1YTRlNTc3ZDQ2ODVjZjQyNzM5ZmE2NTdjODeCMRnM: 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2U3NjljZDNjMDUwZjI1ZTJhYjU2YjI3NTBiNjI0M2YwNmQ2OWM5OGM2OGU0YjQ0YTQyOTMyZDhjZjRmZmU5Zrgj/jo=: ]] 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2U3NjljZDNjMDUwZjI1ZTJhYjU2YjI3NTBiNjI0M2YwNmQ2OWM5OGM2OGU0YjQ0YTQyOTMyZDhjZjRmZmU5Zrgj/jo=: 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.913 14:02:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:45.913 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:46.172 nvme0n1 00:41:46.172 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:46.172 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:46.172 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:46.172 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:46.172 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:46.430 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:46.430 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:46.430 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:46.430 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:46.430 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:46.430 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:46.430 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:46.430 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:41:46.431 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:46.431 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:46.431 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:46.431 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:46.431 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YjVlMWFkMGIwNWQ5MzU3N2Q2ZDFlYjYzOTYwNTY4OTVjNzhhZmU2ZjExYWZlY2E08OFetQ==: 00:41:46.431 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: 00:41:46.431 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:46.431 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:46.431 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjVlMWFkMGIwNWQ5MzU3N2Q2ZDFlYjYzOTYwNTY4OTVjNzhhZmU2ZjExYWZlY2E08OFetQ==: 00:41:46.431 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: ]] 00:41:46.431 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: 00:41:46.431 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:41:46.431 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:46.431 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:46.431 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:46.431 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:46.431 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:46.431 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:41:46.431 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:46.431 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:46.431 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:46.431 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:46.431 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:46.431 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:46.431 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:46.431 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:46.431 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:46.431 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:46.431 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:46.431 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:46.431 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:46.431 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:46.431 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:46.431 14:02:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:46.431 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:46.690 nvme0n1 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODUwOWNkNGI1OWY1MTA5NDdhNTc4YzdiZjdkMTY2NTEXtW+3: 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODUwOWNkNGI1OWY1MTA5NDdhNTc4YzdiZjdkMTY2NTEXtW+3: 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: ]] 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:46.690 14:02:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:46.948 nvme0n1 00:41:46.948 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:46.948 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:46.948 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:46.948 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:46.948 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:47.207 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:47.207 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:47.207 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:47.207 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:47.207 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:47.207 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:47.207 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:47.207 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:41:47.207 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:47.207 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:47.207 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:47.207 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:47.207 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E5NmRiZjM3YzhhNWMyNzQ4MWYyNjI5MmZjN2M5NjhhNGI3YzI0N2UzZjNlZjJighm8Fw==: 00:41:47.207 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2Q3ZDFkM2ZjYTAzMWQ0MzU4MGY1ZGUwMjJmY2I2OTgUXj2A: 00:41:47.207 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:47.207 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:47.207 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E5NmRiZjM3YzhhNWMyNzQ4MWYyNjI5MmZjN2M5NjhhNGI3YzI0N2UzZjNlZjJighm8Fw==: 00:41:47.207 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2Q3ZDFkM2ZjYTAzMWQ0MzU4MGY1ZGUwMjJmY2I2OTgUXj2A: ]] 00:41:47.207 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2Q3ZDFkM2ZjYTAzMWQ0MzU4MGY1ZGUwMjJmY2I2OTgUXj2A: 00:41:47.207 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:41:47.207 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:47.207 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:47.207 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:47.207 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:47.207 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:47.207 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:41:47.207 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:47.207 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:47.207 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:47.207 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:47.208 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:47.208 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:47.208 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:47.208 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:47.208 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:47.208 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:47.208 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:47.208 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:47.208 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:47.208 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:47.208 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:47.208 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:47.208 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:47.466 nvme0n1 00:41:47.466 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:47.466 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:47.467 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:47.467 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:47.467 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:47.467 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:47.467 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:47.467 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:47.467 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:47.467 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:47.467 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:47.467 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:47.467 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:41:47.467 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:47.467 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:47.467 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:47.467 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:47.467 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWMxOTdkOTE3ODViY2Y2ZDViMjY0YTE5OGQyNmU0NjMyODc5YTQ5YmVlYmQ0ZDg1YmUyMzA3Yzg3OTc2ZWZkN6e8k/E=: 00:41:47.467 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:47.467 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:47.467 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:47.467 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWMxOTdkOTE3ODViY2Y2ZDViMjY0YTE5OGQyNmU0NjMyODc5YTQ5YmVlYmQ0ZDg1YmUyMzA3Yzg3OTc2ZWZkN6e8k/E=: 00:41:47.467 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:47.467 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:41:47.467 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:47.467 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:47.467 14:02:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:47.467 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:47.467 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:47.467 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:41:47.467 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:47.467 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:47.467 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:47.467 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:47.467 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:47.467 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:47.467 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:47.467 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:47.467 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:47.467 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:47.467 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:47.467 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:47.467 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:47.467 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:47.467 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:47.467 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:47.467 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:47.727 nvme0n1 00:41:47.727 14:02:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:47.727 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:47.727 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:47.727 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:47.727 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:47.727 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:47.727 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:47.727 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:47.727 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:47.727 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:47.987 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:47.987 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:47.987 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:47.987 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:41:47.987 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:47.987 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:47.987 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:47.987 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:47.987 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVjOWI1YTRlNTc3ZDQ2ODVjZjQyNzM5ZmE2NTdjODeCMRnM: 00:41:47.987 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2U3NjljZDNjMDUwZjI1ZTJhYjU2YjI3NTBiNjI0M2YwNmQ2OWM5OGM2OGU0YjQ0YTQyOTMyZDhjZjRmZmU5Zrgj/jo=: 00:41:47.987 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:47.987 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:47.987 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVjOWI1YTRlNTc3ZDQ2ODVjZjQyNzM5ZmE2NTdjODeCMRnM: 00:41:47.987 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2U3NjljZDNjMDUwZjI1ZTJhYjU2YjI3NTBiNjI0M2YwNmQ2OWM5OGM2OGU0YjQ0YTQyOTMyZDhjZjRmZmU5Zrgj/jo=: ]] 00:41:47.987 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2U3NjljZDNjMDUwZjI1ZTJhYjU2YjI3NTBiNjI0M2YwNmQ2OWM5OGM2OGU0YjQ0YTQyOTMyZDhjZjRmZmU5Zrgj/jo=: 00:41:47.987 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:41:47.987 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:47.987 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:47.987 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:47.987 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:47.987 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:47.987 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:41:47.987 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:47.987 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:47.987 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:47.987 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:47.987 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:47.987 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:47.987 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:47.987 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:47.987 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:47.987 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:47.987 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:47.987 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:47.987 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:47.987 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:47.987 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:47.987 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:47.987 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:48.556 nvme0n1 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjVlMWFkMGIwNWQ5MzU3N2Q2ZDFlYjYzOTYwNTY4OTVjNzhhZmU2ZjExYWZlY2E08OFetQ==: 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YjVlMWFkMGIwNWQ5MzU3N2Q2ZDFlYjYzOTYwNTY4OTVjNzhhZmU2ZjExYWZlY2E08OFetQ==: 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: ]] 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:48.556 14:02:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:49.124 nvme0n1 00:41:49.124 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.124 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:49.124 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.124 14:02:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:49.124 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:49.124 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.124 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:49.124 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:49.124 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.124 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:49.124 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.124 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:49.124 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:41:49.124 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:49.124 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:49.124 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:49.124 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:49.124 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODUwOWNkNGI1OWY1MTA5NDdhNTc4YzdiZjdkMTY2NTEXtW+3: 00:41:49.124 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: 00:41:49.124 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:49.124 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:49.124 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODUwOWNkNGI1OWY1MTA5NDdhNTc4YzdiZjdkMTY2NTEXtW+3: 00:41:49.124 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: ]] 00:41:49.124 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: 00:41:49.124 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:41:49.124 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:49.124 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:49.124 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:49.124 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:49.124 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:49.124 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:41:49.124 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.124 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:49.124 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.124 14:02:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:49.124 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:49.124 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:49.124 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:49.124 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:49.124 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:49.124 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:49.124 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:49.124 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:49.124 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:49.124 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:49.124 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:49.124 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.124 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:49.694 nvme0n1 00:41:49.694 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.694 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:49.694 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:49.694 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.694 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:49.694 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.694 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:49.694 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:49.694 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.694 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:49.694 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.694 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:49.694 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:41:49.694 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:49.695 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:49.695 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:49.695 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:49.695 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:M2E5NmRiZjM3YzhhNWMyNzQ4MWYyNjI5MmZjN2M5NjhhNGI3YzI0N2UzZjNlZjJighm8Fw==: 00:41:49.695 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2Q3ZDFkM2ZjYTAzMWQ0MzU4MGY1ZGUwMjJmY2I2OTgUXj2A: 00:41:49.695 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:49.695 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:49.695 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E5NmRiZjM3YzhhNWMyNzQ4MWYyNjI5MmZjN2M5NjhhNGI3YzI0N2UzZjNlZjJighm8Fw==: 00:41:49.695 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2Q3ZDFkM2ZjYTAzMWQ0MzU4MGY1ZGUwMjJmY2I2OTgUXj2A: ]] 00:41:49.695 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2Q3ZDFkM2ZjYTAzMWQ0MzU4MGY1ZGUwMjJmY2I2OTgUXj2A: 00:41:49.695 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:41:49.695 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:49.695 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:49.695 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:49.695 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:49.695 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:49.695 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:41:49.695 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.695 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:49.695 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.695 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:49.695 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:49.695 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:49.695 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:49.695 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:49.695 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:49.695 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:49.695 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:49.695 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:49.695 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:49.695 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:49.695 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:49.695 14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.695 
14:02:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:50.263 nvme0n1 00:41:50.263 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:50.263 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:50.263 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:50.263 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:50.263 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:50.263 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:50.263 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:50.263 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:50.263 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:50.263 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:50.263 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:50.263 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:50.263 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:41:50.263 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:50.263 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:41:50.263 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:50.263 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:50.263 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWMxOTdkOTE3ODViY2Y2ZDViMjY0YTE5OGQyNmU0NjMyODc5YTQ5YmVlYmQ0ZDg1YmUyMzA3Yzg3OTc2ZWZkN6e8k/E=: 00:41:50.263 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:50.263 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:41:50.263 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:50.263 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWMxOTdkOTE3ODViY2Y2ZDViMjY0YTE5OGQyNmU0NjMyODc5YTQ5YmVlYmQ0ZDg1YmUyMzA3Yzg3OTc2ZWZkN6e8k/E=: 00:41:50.263 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:50.263 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:41:50.263 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:50.263 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:41:50.263 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:50.263 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:50.263 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:50.263 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:41:50.263 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:41:50.263 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:50.263 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:50.263 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:50.263 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:50.263 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:50.263 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:50.263 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:50.263 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:50.263 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:50.263 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:50.263 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:50.263 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:50.263 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:50.263 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:50.263 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:50.263 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:50.832 nvme0n1 00:41:50.832 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:50.832 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:50.832 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:50.832 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:50.832 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:50.832 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:50.832 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:50.832 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:50.832 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:50.832 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:50.832 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:50.832 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:41:50.832 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:50.832 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:50.832 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:41:50.832 14:02:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:50.832 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:50.832 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:50.832 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:50.832 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVjOWI1YTRlNTc3ZDQ2ODVjZjQyNzM5ZmE2NTdjODeCMRnM: 00:41:50.832 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2U3NjljZDNjMDUwZjI1ZTJhYjU2YjI3NTBiNjI0M2YwNmQ2OWM5OGM2OGU0YjQ0YTQyOTMyZDhjZjRmZmU5Zrgj/jo=: 00:41:50.832 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:50.832 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:50.832 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVjOWI1YTRlNTc3ZDQ2ODVjZjQyNzM5ZmE2NTdjODeCMRnM: 00:41:50.832 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2U3NjljZDNjMDUwZjI1ZTJhYjU2YjI3NTBiNjI0M2YwNmQ2OWM5OGM2OGU0YjQ0YTQyOTMyZDhjZjRmZmU5Zrgj/jo=: ]] 00:41:50.832 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2U3NjljZDNjMDUwZjI1ZTJhYjU2YjI3NTBiNjI0M2YwNmQ2OWM5OGM2OGU0YjQ0YTQyOTMyZDhjZjRmZmU5Zrgj/jo=: 00:41:50.832 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:41:50.832 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:50.832 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:50.832 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:50.832 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:50.832 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:50.833 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:41:50.833 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:50.833 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:50.833 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:50.833 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:50.833 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:50.833 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:50.833 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:50.833 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:50.833 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:50.833 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:50.833 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:50.833 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:50.833 14:02:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:50.833 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:50.833 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:50.833 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:50.833 14:02:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:50.833 nvme0n1 00:41:50.833 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:50.833 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:50.833 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:50.833 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:50.833 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:50.833 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:50.833 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:50.833 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:50.833 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:50.833 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:50.833 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:50.833 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:50.833 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:41:50.833 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:50.833 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:50.833 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:50.833 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:50.833 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjVlMWFkMGIwNWQ5MzU3N2Q2ZDFlYjYzOTYwNTY4OTVjNzhhZmU2ZjExYWZlY2E08OFetQ==: 00:41:50.833 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: 00:41:50.833 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:50.833 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:50.833 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjVlMWFkMGIwNWQ5MzU3N2Q2ZDFlYjYzOTYwNTY4OTVjNzhhZmU2ZjExYWZlY2E08OFetQ==: 00:41:50.833 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: ]] 00:41:50.833 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: 00:41:50.833 14:02:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:41:50.833 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:50.833 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:50.833 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:50.833 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:50.833 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:50.833 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:41:50.833 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:50.833 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:51.093 nvme0n1 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODUwOWNkNGI1OWY1MTA5NDdhNTc4YzdiZjdkMTY2NTEXtW+3: 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODUwOWNkNGI1OWY1MTA5NDdhNTc4YzdiZjdkMTY2NTEXtW+3: 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: ]] 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:51.093 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:51.353 nvme0n1 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E5NmRiZjM3YzhhNWMyNzQ4MWYyNjI5MmZjN2M5NjhhNGI3YzI0N2UzZjNlZjJighm8Fw==: 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2Q3ZDFkM2ZjYTAzMWQ0MzU4MGY1ZGUwMjJmY2I2OTgUXj2A: 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:M2E5NmRiZjM3YzhhNWMyNzQ4MWYyNjI5MmZjN2M5NjhhNGI3YzI0N2UzZjNlZjJighm8Fw==: 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2Q3ZDFkM2ZjYTAzMWQ0MzU4MGY1ZGUwMjJmY2I2OTgUXj2A: ]] 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2Q3ZDFkM2ZjYTAzMWQ0MzU4MGY1ZGUwMjJmY2I2OTgUXj2A: 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:51.353 nvme0n1 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:51.353 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:51.613 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:51.613 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:51.613 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:51.613 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:51.613 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:51.613 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:51.613 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:41:51.613 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:51.613 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:51.613 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:51.613 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:51.613 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWMxOTdkOTE3ODViY2Y2ZDViMjY0YTE5OGQyNmU0NjMyODc5YTQ5YmVlYmQ0ZDg1YmUyMzA3Yzg3OTc2ZWZkN6e8k/E=: 00:41:51.613 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:51.613 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:51.613 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:51.613 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWMxOTdkOTE3ODViY2Y2ZDViMjY0YTE5OGQyNmU0NjMyODc5YTQ5YmVlYmQ0ZDg1YmUyMzA3Yzg3OTc2ZWZkN6e8k/E=: 00:41:51.613 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:51.613 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:41:51.613 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:51.613 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:51.613 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:41:51.613 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:51.613 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:51.613 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:41:51.613 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:51.613 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:51.613 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:51.613 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:51.613 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:51.613 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:41:51.613 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:51.613 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:51.613 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:51.613 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:51.614 nvme0n1 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVjOWI1YTRlNTc3ZDQ2ODVjZjQyNzM5ZmE2NTdjODeCMRnM: 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:Y2U3NjljZDNjMDUwZjI1ZTJhYjU2YjI3NTBiNjI0M2YwNmQ2OWM5OGM2OGU0YjQ0YTQyOTMyZDhjZjRmZmU5Zrgj/jo=: 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVjOWI1YTRlNTc3ZDQ2ODVjZjQyNzM5ZmE2NTdjODeCMRnM: 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2U3NjljZDNjMDUwZjI1ZTJhYjU2YjI3NTBiNjI0M2YwNmQ2OWM5OGM2OGU0YjQ0YTQyOTMyZDhjZjRmZmU5Zrgj/jo=: ]] 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2U3NjljZDNjMDUwZjI1ZTJhYjU2YjI3NTBiNjI0M2YwNmQ2OWM5OGM2OGU0YjQ0YTQyOTMyZDhjZjRmZmU5Zrgj/jo=: 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:51.614 14:02:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:41:51.874 nvme0n1 00:41:51.874 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:51.874 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:51.874 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:51.874 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:51.874 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:51.875 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:51.875 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:51.875 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:51.875 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:51.875 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:51.875 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:51.875 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:51.875 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:41:51.875 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:51.875 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:51.875 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:51.875 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:51.875 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjVlMWFkMGIwNWQ5MzU3N2Q2ZDFlYjYzOTYwNTY4OTVjNzhhZmU2ZjExYWZlY2E08OFetQ==: 00:41:51.875 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: 00:41:51.875 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:51.875 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:51.875 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjVlMWFkMGIwNWQ5MzU3N2Q2ZDFlYjYzOTYwNTY4OTVjNzhhZmU2ZjExYWZlY2E08OFetQ==: 00:41:51.875 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: ]] 00:41:51.875 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: 00:41:51.875 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:41:51.875 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:51.875 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:51.875 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:51.875 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:51.875 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:41:51.875 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:41:51.875 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:51.875 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:51.875 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:51.875 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:51.875 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:51.875 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:51.875 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:51.875 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:51.875 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:51.875 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:51.875 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:51.875 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:51.875 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:51.875 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:51.875 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:51.875 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:51.875 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:52.135 nvme0n1 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:41:52.135 
14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODUwOWNkNGI1OWY1MTA5NDdhNTc4YzdiZjdkMTY2NTEXtW+3: 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODUwOWNkNGI1OWY1MTA5NDdhNTc4YzdiZjdkMTY2NTEXtW+3: 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: ]] 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:52.135 nvme0n1 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:52.135 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E5NmRiZjM3YzhhNWMyNzQ4MWYyNjI5MmZjN2M5NjhhNGI3YzI0N2UzZjNlZjJighm8Fw==: 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2Q3ZDFkM2ZjYTAzMWQ0MzU4MGY1ZGUwMjJmY2I2OTgUXj2A: 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E5NmRiZjM3YzhhNWMyNzQ4MWYyNjI5MmZjN2M5NjhhNGI3YzI0N2UzZjNlZjJighm8Fw==: 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2Q3ZDFkM2ZjYTAzMWQ0MzU4MGY1ZGUwMjJmY2I2OTgUXj2A: ]] 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2Q3ZDFkM2ZjYTAzMWQ0MzU4MGY1ZGUwMjJmY2I2OTgUXj2A: 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:52.395 
14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:52.395 nvme0n1 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWMxOTdkOTE3ODViY2Y2ZDViMjY0YTE5OGQyNmU0NjMyODc5YTQ5YmVlYmQ0ZDg1YmUyMzA3Yzg3OTc2ZWZkN6e8k/E=: 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWMxOTdkOTE3ODViY2Y2ZDViMjY0YTE5OGQyNmU0NjMyODc5YTQ5YmVlYmQ0ZDg1YmUyMzA3Yzg3OTc2ZWZkN6e8k/E=: 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:52.395 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:52.655 nvme0n1 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVjOWI1YTRlNTc3ZDQ2ODVjZjQyNzM5ZmE2NTdjODeCMRnM: 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2U3NjljZDNjMDUwZjI1ZTJhYjU2YjI3NTBiNjI0M2YwNmQ2OWM5OGM2OGU0YjQ0YTQyOTMyZDhjZjRmZmU5Zrgj/jo=: 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVjOWI1YTRlNTc3ZDQ2ODVjZjQyNzM5ZmE2NTdjODeCMRnM: 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2U3NjljZDNjMDUwZjI1ZTJhYjU2YjI3NTBiNjI0M2YwNmQ2OWM5OGM2OGU0YjQ0YTQyOTMyZDhjZjRmZmU5Zrgj/jo=: ]] 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:Y2U3NjljZDNjMDUwZjI1ZTJhYjU2YjI3NTBiNjI0M2YwNmQ2OWM5OGM2OGU0YjQ0YTQyOTMyZDhjZjRmZmU5Zrgj/jo=: 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.655 14:02:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:52.914 nvme0n1 00:41:52.914 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.914 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:52.914 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:52.914 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.914 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:52.914 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.914 
14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:52.914 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:52.914 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.914 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:52.914 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.914 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:52.914 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:41:52.914 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:52.914 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:52.914 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:52.914 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:52.914 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjVlMWFkMGIwNWQ5MzU3N2Q2ZDFlYjYzOTYwNTY4OTVjNzhhZmU2ZjExYWZlY2E08OFetQ==: 00:41:52.914 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: 00:41:52.914 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:52.914 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:52.914 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjVlMWFkMGIwNWQ5MzU3N2Q2ZDFlYjYzOTYwNTY4OTVjNzhhZmU2ZjExYWZlY2E08OFetQ==: 00:41:52.914 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: ]] 00:41:52.914 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: 00:41:52.914 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:41:52.914 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:52.914 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:52.914 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:52.914 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:52.914 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:52.914 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:41:52.914 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.914 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:52.914 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.914 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:52.915 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:52.915 14:02:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:52.915 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:52.915 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:52.915 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:52.915 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:52.915 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:52.915 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:52.915 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:52.915 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:52.915 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:52.915 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.915 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:53.174 nvme0n1 00:41:53.174 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:53.174 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:53.174 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:53.174 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:53.174 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:53.174 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:53.174 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:53.174 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:53.174 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:53.174 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:53.174 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:53.174 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:53.174 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:41:53.174 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:53.174 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:53.174 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:53.174 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:53.174 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODUwOWNkNGI1OWY1MTA5NDdhNTc4YzdiZjdkMTY2NTEXtW+3: 00:41:53.174 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: 00:41:53.174 14:02:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:53.174 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:53.174 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODUwOWNkNGI1OWY1MTA5NDdhNTc4YzdiZjdkMTY2NTEXtW+3: 00:41:53.174 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: ]] 00:41:53.174 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: 00:41:53.174 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:41:53.174 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:53.174 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:53.174 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:53.174 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:53.174 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:53.174 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:41:53.174 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:53.174 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:53.174 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:53.174 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:53.174 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:53.174 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:53.174 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:53.174 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:53.174 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:53.174 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:53.174 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:53.174 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:53.174 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:53.174 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:53.174 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:53.174 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:53.174 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:53.434 nvme0n1 00:41:53.434 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:53.434 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:53.434 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:53.434 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:53.434 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:53.434 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:53.434 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:53.434 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:53.434 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:53.434 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:53.434 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:53.434 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:53.434 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:41:53.434 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:53.434 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:53.434 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:53.434 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:53.434 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E5NmRiZjM3YzhhNWMyNzQ4MWYyNjI5MmZjN2M5NjhhNGI3YzI0N2UzZjNlZjJighm8Fw==: 00:41:53.434 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2Q3ZDFkM2ZjYTAzMWQ0MzU4MGY1ZGUwMjJmY2I2OTgUXj2A: 00:41:53.434 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:53.434 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:53.434 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E5NmRiZjM3YzhhNWMyNzQ4MWYyNjI5MmZjN2M5NjhhNGI3YzI0N2UzZjNlZjJighm8Fw==: 00:41:53.434 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2Q3ZDFkM2ZjYTAzMWQ0MzU4MGY1ZGUwMjJmY2I2OTgUXj2A: ]] 00:41:53.434 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2Q3ZDFkM2ZjYTAzMWQ0MzU4MGY1ZGUwMjJmY2I2OTgUXj2A: 00:41:53.434 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:41:53.434 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:53.434 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:53.434 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:53.434 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:53.434 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:53.434 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:41:53.434 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:53.434 14:02:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:53.434 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:53.434 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:53.434 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:53.434 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:53.434 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:53.434 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:53.434 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:53.434 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:53.434 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:53.434 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:53.434 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:53.434 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:53.434 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:53.434 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:53.434 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:53.694 nvme0n1 00:41:53.694 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:53.694 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:53.694 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:53.694 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:53.694 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:53.695 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:53.695 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:53.695 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:53.695 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:53.695 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:53.695 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:53.695 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:53.695 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:41:53.695 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:53.695 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:53.695 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:41:53.695 
14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:53.695 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWMxOTdkOTE3ODViY2Y2ZDViMjY0YTE5OGQyNmU0NjMyODc5YTQ5YmVlYmQ0ZDg1YmUyMzA3Yzg3OTc2ZWZkN6e8k/E=: 00:41:53.695 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:53.695 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:53.695 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:41:53.695 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWMxOTdkOTE3ODViY2Y2ZDViMjY0YTE5OGQyNmU0NjMyODc5YTQ5YmVlYmQ0ZDg1YmUyMzA3Yzg3OTc2ZWZkN6e8k/E=: 00:41:53.695 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:53.695 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:41:53.695 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:53.695 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:53.695 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:41:53.695 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:53.695 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:53.695 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:41:53.695 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:53.695 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:53.695 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:53.695 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:53.695 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:53.695 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:53.695 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:53.695 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:53.695 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:53.695 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:53.695 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:53.695 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:53.695 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:53.695 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:53.695 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:53.695 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:53.695 14:02:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
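A minimal sketch of the host-side cycle the trace above repeats for every digest / DH-group / key-index combination, assuming the target subsystem nqn.2024-02.io.spdk:cnode0 is already listening on 10.0.0.1:4420, that key0/ckey0 were registered earlier in the test script (not shown in this excerpt), and that rpc_cmd is the harness's wrapper around the SPDK RPC client; every command and flag below is taken from the trace itself:

    # One connect_authenticate pass (sha512 / ffdhe4096 / key index 0), as a sketch.
    # Restrict the host to the digest and DH group under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
    # Attach with in-band DH-HMAC-CHAP, using the pre-registered host and controller keys.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # Authentication succeeded if the controller shows up as nvme0; detach for the next pass.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0

The subsequent log entries show the same pass repeated with key indices 1-4 and then with the ffdhe6144 and ffdhe8192 DH groups.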
00:41:53.954 nvme0n1 00:41:53.954 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:53.954 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:53.954 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:53.954 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:53.954 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:53.954 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:53.954 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:53.954 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:53.954 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:53.954 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:53.954 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:53.954 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:53.954 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:53.954 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:41:53.954 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:53.954 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:53.954 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:53.954 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:53.954 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVjOWI1YTRlNTc3ZDQ2ODVjZjQyNzM5ZmE2NTdjODeCMRnM: 00:41:53.954 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2U3NjljZDNjMDUwZjI1ZTJhYjU2YjI3NTBiNjI0M2YwNmQ2OWM5OGM2OGU0YjQ0YTQyOTMyZDhjZjRmZmU5Zrgj/jo=: 00:41:53.954 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:53.954 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:53.954 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVjOWI1YTRlNTc3ZDQ2ODVjZjQyNzM5ZmE2NTdjODeCMRnM: 00:41:53.955 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2U3NjljZDNjMDUwZjI1ZTJhYjU2YjI3NTBiNjI0M2YwNmQ2OWM5OGM2OGU0YjQ0YTQyOTMyZDhjZjRmZmU5Zrgj/jo=: ]] 00:41:53.955 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2U3NjljZDNjMDUwZjI1ZTJhYjU2YjI3NTBiNjI0M2YwNmQ2OWM5OGM2OGU0YjQ0YTQyOTMyZDhjZjRmZmU5Zrgj/jo=: 00:41:53.955 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:41:53.955 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:53.955 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:53.955 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:53.955 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:53.955 14:02:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:53.955 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:41:53.955 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:53.955 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:53.955 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:53.955 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:53.955 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:53.955 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:53.955 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:53.955 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:53.955 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:53.955 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:53.955 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:53.955 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:53.955 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:53.955 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:53.955 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:53.955 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:53.955 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:54.215 nvme0n1 00:41:54.215 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:54.215 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:54.215 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:54.215 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:54.215 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:54.215 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:54.475 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:54.475 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:54.475 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:54.475 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:54.475 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:54.475 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:54.475 14:02:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:41:54.475 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:54.475 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:54.475 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:54.475 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:54.475 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjVlMWFkMGIwNWQ5MzU3N2Q2ZDFlYjYzOTYwNTY4OTVjNzhhZmU2ZjExYWZlY2E08OFetQ==: 00:41:54.475 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: 00:41:54.475 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:54.475 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:54.475 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjVlMWFkMGIwNWQ5MzU3N2Q2ZDFlYjYzOTYwNTY4OTVjNzhhZmU2ZjExYWZlY2E08OFetQ==: 00:41:54.475 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: ]] 00:41:54.475 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: 00:41:54.475 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:41:54.475 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:54.475 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:54.475 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:54.475 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:54.475 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:54.475 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:41:54.475 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:54.475 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:54.475 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:54.475 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:54.475 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:54.475 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:54.475 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:54.475 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:54.475 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:54.475 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:54.475 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:54.475 14:02:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:54.475 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:54.475 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:54.475 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:54.475 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:54.475 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:54.734 nvme0n1 00:41:54.734 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:54.734 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:54.734 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:54.734 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:54.734 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:54.734 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:54.735 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:54.735 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:54.735 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:54.735 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:54.735 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:54.735 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:54.735 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:41:54.735 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:54.735 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:54.735 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:54.735 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:54.735 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODUwOWNkNGI1OWY1MTA5NDdhNTc4YzdiZjdkMTY2NTEXtW+3: 00:41:54.735 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: 00:41:54.735 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:54.735 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:54.735 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODUwOWNkNGI1OWY1MTA5NDdhNTc4YzdiZjdkMTY2NTEXtW+3: 00:41:54.735 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: ]] 00:41:54.735 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: 00:41:54.735 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:41:54.735 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:54.735 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:54.735 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:54.735 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:54.735 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:54.735 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:41:54.735 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:54.735 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:54.735 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:54.735 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:54.735 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:54.735 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:54.735 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:54.735 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:54.735 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:54.735 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:54.735 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:54.735 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:54.735 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:54.735 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:54.735 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:54.735 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:54.735 14:02:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:54.994 nvme0n1 00:41:54.994 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:54.994 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:54.994 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:54.994 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:54.994 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:54.994 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:55.253 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:55.253 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:41:55.253 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:55.253 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:55.253 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:55.253 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:55.253 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:41:55.253 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:55.253 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:55.253 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:55.253 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:55.253 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E5NmRiZjM3YzhhNWMyNzQ4MWYyNjI5MmZjN2M5NjhhNGI3YzI0N2UzZjNlZjJighm8Fw==: 00:41:55.253 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2Q3ZDFkM2ZjYTAzMWQ0MzU4MGY1ZGUwMjJmY2I2OTgUXj2A: 00:41:55.253 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:55.253 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:55.253 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E5NmRiZjM3YzhhNWMyNzQ4MWYyNjI5MmZjN2M5NjhhNGI3YzI0N2UzZjNlZjJighm8Fw==: 00:41:55.253 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2Q3ZDFkM2ZjYTAzMWQ0MzU4MGY1ZGUwMjJmY2I2OTgUXj2A: ]] 00:41:55.253 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2Q3ZDFkM2ZjYTAzMWQ0MzU4MGY1ZGUwMjJmY2I2OTgUXj2A: 00:41:55.253 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:41:55.254 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:55.254 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:55.254 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:55.254 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:55.254 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:55.254 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:41:55.254 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:55.254 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:55.254 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:55.254 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:55.254 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:55.254 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:55.254 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:55.254 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:55.254 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:55.254 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:55.254 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:55.254 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:55.254 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:55.254 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:55.254 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:55.254 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:55.254 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:55.513 nvme0n1 00:41:55.513 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:55.513 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:55.513 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:55.513 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:55.513 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:55.513 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:55.513 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:55.513 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:55.513 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:55.513 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:55.513 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:55.513 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:55.513 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:41:55.513 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:55.513 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:55.513 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:41:55.513 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:55.513 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWMxOTdkOTE3ODViY2Y2ZDViMjY0YTE5OGQyNmU0NjMyODc5YTQ5YmVlYmQ0ZDg1YmUyMzA3Yzg3OTc2ZWZkN6e8k/E=: 00:41:55.513 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:55.513 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:55.514 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:41:55.514 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZWMxOTdkOTE3ODViY2Y2ZDViMjY0YTE5OGQyNmU0NjMyODc5YTQ5YmVlYmQ0ZDg1YmUyMzA3Yzg3OTc2ZWZkN6e8k/E=: 00:41:55.514 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:55.514 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:41:55.514 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:55.514 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:55.514 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:41:55.514 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:55.514 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:55.514 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:41:55.514 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:55.514 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:55.514 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:55.514 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:55.514 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:55.514 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:55.514 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:55.514 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:55.514 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:55.514 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:55.514 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:55.514 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:55.514 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:55.514 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:55.514 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:55.514 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:55.514 14:02:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:55.773 nvme0n1 00:41:55.773 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:55.773 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:55.773 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:55.773 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:55.773 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:55.773 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:56.068 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:56.068 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:56.068 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:56.068 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:56.068 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:56.068 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:41:56.068 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:56.068 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:41:56.068 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:56.068 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:56.068 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:56.068 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:41:56.068 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVjOWI1YTRlNTc3ZDQ2ODVjZjQyNzM5ZmE2NTdjODeCMRnM: 00:41:56.068 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2U3NjljZDNjMDUwZjI1ZTJhYjU2YjI3NTBiNjI0M2YwNmQ2OWM5OGM2OGU0YjQ0YTQyOTMyZDhjZjRmZmU5Zrgj/jo=: 00:41:56.068 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:56.068 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:56.069 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVjOWI1YTRlNTc3ZDQ2ODVjZjQyNzM5ZmE2NTdjODeCMRnM: 00:41:56.069 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2U3NjljZDNjMDUwZjI1ZTJhYjU2YjI3NTBiNjI0M2YwNmQ2OWM5OGM2OGU0YjQ0YTQyOTMyZDhjZjRmZmU5Zrgj/jo=: ]] 00:41:56.069 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2U3NjljZDNjMDUwZjI1ZTJhYjU2YjI3NTBiNjI0M2YwNmQ2OWM5OGM2OGU0YjQ0YTQyOTMyZDhjZjRmZmU5Zrgj/jo=: 00:41:56.069 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:41:56.069 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:56.069 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:56.069 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:56.069 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:41:56.069 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:56.069 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:41:56.069 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:56.069 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:56.069 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:56.069 14:02:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:56.069 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:56.069 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:56.069 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:56.069 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:56.069 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:56.069 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:56.069 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:56.069 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:56.069 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:56.069 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:56.069 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:56.069 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:56.069 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:56.328 nvme0n1 00:41:56.328 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:56.328 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:56.328 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:56.328 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:56.328 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:56.588 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:56.588 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:56.588 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:56.588 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:56.588 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:56.588 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:56.588 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:56.588 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:41:56.588 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:56.588 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:56.588 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:56.588 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:56.588 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YjVlMWFkMGIwNWQ5MzU3N2Q2ZDFlYjYzOTYwNTY4OTVjNzhhZmU2ZjExYWZlY2E08OFetQ==: 00:41:56.588 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: 00:41:56.588 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:56.588 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:56.588 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjVlMWFkMGIwNWQ5MzU3N2Q2ZDFlYjYzOTYwNTY4OTVjNzhhZmU2ZjExYWZlY2E08OFetQ==: 00:41:56.588 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: ]] 00:41:56.588 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: 00:41:56.588 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:41:56.588 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:56.588 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:56.588 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:56.588 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:41:56.588 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:56.588 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:41:56.588 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:56.588 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:56.588 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:56.588 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:56.588 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:56.588 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:56.588 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:56.588 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:56.588 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:56.588 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:56.589 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:56.589 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:56.589 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:56.589 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:56.589 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:56.589 14:02:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:56.589 14:02:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:57.157 nvme0n1 00:41:57.157 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:57.158 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:57.158 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:57.158 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:57.158 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:57.158 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:57.158 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:57.158 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:57.158 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:57.158 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:57.158 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:57.158 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:57.158 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:41:57.158 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:57.158 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:57.158 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:57.158 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:57.158 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODUwOWNkNGI1OWY1MTA5NDdhNTc4YzdiZjdkMTY2NTEXtW+3: 00:41:57.158 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: 00:41:57.158 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:57.158 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:57.158 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODUwOWNkNGI1OWY1MTA5NDdhNTc4YzdiZjdkMTY2NTEXtW+3: 00:41:57.158 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: ]] 00:41:57.158 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: 00:41:57.158 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:41:57.158 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:57.158 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:57.158 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:57.158 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:41:57.158 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:57.158 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:41:57.158 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:57.158 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:57.158 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:57.158 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:57.158 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:57.158 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:57.158 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:57.158 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:57.158 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:57.158 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:57.158 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:57.158 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:57.158 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:57.158 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:57.158 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:57.158 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:57.158 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:57.728 nvme0n1 00:41:57.728 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:57.728 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:57.728 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:57.728 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:57.728 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:57.728 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:57.728 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:57.728 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:57.728 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:57.728 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:57.728 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:57.728 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:57.728 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:41:57.728 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:57.728 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:57.728 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:57.728 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:41:57.728 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2E5NmRiZjM3YzhhNWMyNzQ4MWYyNjI5MmZjN2M5NjhhNGI3YzI0N2UzZjNlZjJighm8Fw==: 00:41:57.728 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2Q3ZDFkM2ZjYTAzMWQ0MzU4MGY1ZGUwMjJmY2I2OTgUXj2A: 00:41:57.728 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:57.728 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:57.728 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2E5NmRiZjM3YzhhNWMyNzQ4MWYyNjI5MmZjN2M5NjhhNGI3YzI0N2UzZjNlZjJighm8Fw==: 00:41:57.728 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2Q3ZDFkM2ZjYTAzMWQ0MzU4MGY1ZGUwMjJmY2I2OTgUXj2A: ]] 00:41:57.728 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2Q3ZDFkM2ZjYTAzMWQ0MzU4MGY1ZGUwMjJmY2I2OTgUXj2A: 00:41:57.728 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:41:57.728 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:57.728 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:57.728 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:57.728 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:41:57.728 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:57.728 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:41:57.729 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:57.729 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:57.729 14:02:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:57.729 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:57.729 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:57.729 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:57.729 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:57.729 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:57.729 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:57.729 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:57.729 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:57.729 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:57.729 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:57.729 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:57.729 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:41:57.729 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:57.729 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:58.666 nvme0n1 00:41:58.666 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:58.666 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:58.666 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:58.666 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:58.666 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:58.666 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:58.666 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:58.666 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:58.666 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:58.666 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:58.666 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:58.666 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:41:58.666 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:41:58.666 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:58.666 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:41:58.666 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:41:58.666 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:41:58.666 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWMxOTdkOTE3ODViY2Y2ZDViMjY0YTE5OGQyNmU0NjMyODc5YTQ5YmVlYmQ0ZDg1YmUyMzA3Yzg3OTc2ZWZkN6e8k/E=: 00:41:58.666 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:41:58.666 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:41:58.666 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:41:58.666 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWMxOTdkOTE3ODViY2Y2ZDViMjY0YTE5OGQyNmU0NjMyODc5YTQ5YmVlYmQ0ZDg1YmUyMzA3Yzg3OTc2ZWZkN6e8k/E=: 00:41:58.666 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:41:58.666 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:41:58.666 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:41:58.666 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:41:58.666 14:02:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:41:58.666 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:41:58.666 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:41:58.666 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:41:58.666 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:58.666 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:58.666 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:58.666 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:41:58.666 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:58.666 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:58.666 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:58.666 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:58.666 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:58.666 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:58.666 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:58.666 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:58.666 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:58.666 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:58.666 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:41:58.666 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:58.666 14:02:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:59.237 nvme0n1 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjVlMWFkMGIwNWQ5MzU3N2Q2ZDFlYjYzOTYwNTY4OTVjNzhhZmU2ZjExYWZlY2E08OFetQ==: 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjVlMWFkMGIwNWQ5MzU3N2Q2ZDFlYjYzOTYwNTY4OTVjNzhhZmU2ZjExYWZlY2E08OFetQ==: 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: ]] 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:59.237 request: 00:41:59.237 { 00:41:59.237 "name": "nvme0", 00:41:59.237 "trtype": "tcp", 00:41:59.237 "traddr": "10.0.0.1", 00:41:59.237 "adrfam": "ipv4", 00:41:59.237 "trsvcid": "4420", 00:41:59.237 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:41:59.237 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:41:59.237 "prchk_reftag": false, 00:41:59.237 "prchk_guard": false, 00:41:59.237 "hdgst": false, 00:41:59.237 "ddgst": false, 00:41:59.237 "allow_unrecognized_csi": false, 00:41:59.237 "method": "bdev_nvme_attach_controller", 00:41:59.237 "req_id": 1 00:41:59.237 } 00:41:59.237 Got JSON-RPC error response 00:41:59.237 response: 00:41:59.237 { 00:41:59.237 "code": -5, 00:41:59.237 "message": "Input/output error" 00:41:59.237 } 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:59.237 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:59.238 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:41:59.238 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:41:59.238 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:59.238 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:59.238 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:59.238 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:41:59.238 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:41:59.238 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:59.238 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:59.238 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:59.238 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:59.238 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:59.238 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:59.238 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:59.238 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:59.238 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:59.238 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:59.238 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:41:59.238 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:41:59.238 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:41:59.238 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:41:59.238 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:59.238 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:41:59.238 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:59.238 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:41:59.238 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:59.238 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:59.238 request: 00:41:59.238 { 00:41:59.238 "name": "nvme0", 00:41:59.238 "trtype": "tcp", 00:41:59.238 "traddr": "10.0.0.1", 00:41:59.238 "adrfam": "ipv4", 00:41:59.238 "trsvcid": "4420", 00:41:59.238 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:41:59.238 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:41:59.238 "prchk_reftag": false, 00:41:59.238 "prchk_guard": false, 00:41:59.238 "hdgst": false, 00:41:59.238 "ddgst": false, 00:41:59.238 "dhchap_key": "key2", 00:41:59.238 "allow_unrecognized_csi": false, 00:41:59.238 "method": "bdev_nvme_attach_controller", 00:41:59.238 "req_id": 1 00:41:59.238 } 00:41:59.238 Got JSON-RPC error response 00:41:59.238 response: 00:41:59.238 { 00:41:59.238 "code": -5, 00:41:59.238 "message": "Input/output error" 00:41:59.238 } 00:41:59.238 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:41:59.238 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:41:59.238 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:59.238 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:59.238 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:59.238 14:02:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:41:59.238 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:41:59.238 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:59.238 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:59.238 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:59.498 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:41:59.498 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:41:59.498 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:59.498 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:59.498 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:59.498 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:59.498 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:59.498 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:59.498 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:59.498 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:59.498 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:59.498 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:59.498 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:41:59.498 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:41:59.498 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:41:59.498 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:41:59.498 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:59.498 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:41:59.498 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:59.498 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:41:59.498 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:59.498 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:59.498 request: 00:41:59.498 { 00:41:59.498 "name": "nvme0", 00:41:59.498 "trtype": "tcp", 00:41:59.498 "traddr": "10.0.0.1", 00:41:59.498 "adrfam": "ipv4", 00:41:59.498 "trsvcid": "4420", 
00:41:59.498 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:41:59.498 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:41:59.498 "prchk_reftag": false, 00:41:59.498 "prchk_guard": false, 00:41:59.498 "hdgst": false, 00:41:59.498 "ddgst": false, 00:41:59.498 "dhchap_key": "key1", 00:41:59.498 "dhchap_ctrlr_key": "ckey2", 00:41:59.498 "allow_unrecognized_csi": false, 00:41:59.498 "method": "bdev_nvme_attach_controller", 00:41:59.498 "req_id": 1 00:41:59.498 } 00:41:59.498 Got JSON-RPC error response 00:41:59.498 response: 00:41:59.498 { 00:41:59.498 "code": -5, 00:41:59.498 "message": "Input/output error" 00:41:59.498 } 00:41:59.498 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:41:59.498 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:41:59.498 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:59.498 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:59.498 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:59.498 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:41:59.498 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:41:59.499 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:41:59.499 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:41:59.499 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:59.499 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:59.499 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:41:59.499 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:59.499 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:41:59.499 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:41:59.499 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:41:59.499 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:41:59.499 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:59.499 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:59.499 nvme0n1 00:41:59.499 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:59.499 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:41:59.499 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:41:59.499 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:41:59.499 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:41:59.499 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:41:59.499 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:ODUwOWNkNGI1OWY1MTA5NDdhNTc4YzdiZjdkMTY2NTEXtW+3: 00:41:59.499 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: 00:41:59.499 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:41:59.499 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:41:59.499 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODUwOWNkNGI1OWY1MTA5NDdhNTc4YzdiZjdkMTY2NTEXtW+3: 00:41:59.499 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: ]] 00:41:59.499 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: 00:41:59.499 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:59.499 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:59.499 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:59.499 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:59.499 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:41:59.499 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:41:59.499 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:59.499 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:59.499 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:59.499 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:59.499 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:41:59.499 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:41:59.499 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:41:59.499 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:41:59.499 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:59.499 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:41:59.499 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:59.499 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:41:59.499 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:59.499 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:59.758 request: 00:41:59.758 { 00:41:59.758 "name": "nvme0", 00:41:59.758 "dhchap_key": "key1", 00:41:59.758 "dhchap_ctrlr_key": "ckey2", 00:41:59.758 "method": "bdev_nvme_set_keys", 00:41:59.758 "req_id": 1 00:41:59.758 } 00:41:59.758 Got JSON-RPC error response 00:41:59.758 response: 00:41:59.758 
{ 00:41:59.758 "code": -5, 00:41:59.758 "message": "Input/output error" 00:41:59.758 } 00:41:59.758 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:41:59.758 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:41:59.758 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:59.758 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:59.758 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:59.758 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:41:59.758 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:59.758 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:59.758 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:41:59.758 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:59.758 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:41:59.758 14:02:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:42:00.696 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:42:00.696 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:42:00.696 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:00.696 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:00.696 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:00.696 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:42:00.696 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:42:00.696 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:00.696 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:42:00.696 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:42:00.696 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:42:00.696 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjVlMWFkMGIwNWQ5MzU3N2Q2ZDFlYjYzOTYwNTY4OTVjNzhhZmU2ZjExYWZlY2E08OFetQ==: 00:42:00.696 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: 00:42:00.696 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:42:00.696 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:42:00.696 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjVlMWFkMGIwNWQ5MzU3N2Q2ZDFlYjYzOTYwNTY4OTVjNzhhZmU2ZjExYWZlY2E08OFetQ==: 00:42:00.696 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: ]] 00:42:00.696 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjY4OGZjOWUzYmQyZTM1MTEyMGU4YTM4YmIzOGM0ZGI0ZjgxNmFjZGYxMWIzMTk4UrOiKg==: 00:42:00.696 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:42:00.696 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:42:00.696 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:42:00.696 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:42:00.696 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:00.696 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:00.696 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:42:00.696 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:00.696 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:42:00.696 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:42:00.696 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:42:00.696 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:42:00.696 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:00.696 14:02:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:00.957 nvme0n1 00:42:00.957 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:00.957 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:42:00.957 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:00.957 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:42:00.957 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:42:00.957 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:42:00.957 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODUwOWNkNGI1OWY1MTA5NDdhNTc4YzdiZjdkMTY2NTEXtW+3: 00:42:00.957 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: 00:42:00.957 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:42:00.957 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:42:00.957 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODUwOWNkNGI1OWY1MTA5NDdhNTc4YzdiZjdkMTY2NTEXtW+3: 00:42:00.957 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: ]] 00:42:00.957 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWNkNGRmYTViNmZiYTViNTc2ZTk2YThiMTNkZmEyN2VWPTM1: 00:42:00.957 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:42:00.957 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:42:00.957 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:42:00.957 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:42:00.957 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:00.957 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:42:00.957 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:00.957 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:42:00.957 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:00.957 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:00.957 request: 00:42:00.957 { 00:42:00.957 "name": "nvme0", 00:42:00.957 "dhchap_key": "key2", 00:42:00.957 "dhchap_ctrlr_key": "ckey1", 00:42:00.957 "method": "bdev_nvme_set_keys", 00:42:00.957 "req_id": 1 00:42:00.957 } 00:42:00.957 Got JSON-RPC error response 00:42:00.957 response: 00:42:00.957 { 00:42:00.957 "code": -13, 00:42:00.957 "message": "Permission denied" 00:42:00.957 } 00:42:00.957 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:42:00.957 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:42:00.957 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:00.957 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:00.957 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:00.957 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:42:00.957 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:00.957 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:42:00.957 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:00.957 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:00.957 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:42:00.957 14:02:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:42:01.895 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:42:01.895 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.895 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:42:01.895 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:01.895 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.895 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:42:01.895 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:42:01.895 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:42:01.895 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:42:01.895 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:42:01.895 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:42:01.895 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:01.895 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:42:01.895 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:01.895 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:01.895 rmmod nvme_tcp 00:42:01.895 rmmod nvme_fabrics 00:42:02.154 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:02.154 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:42:02.154 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:42:02.154 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 78366 ']' 00:42:02.154 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 78366 00:42:02.154 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 78366 ']' 00:42:02.154 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 78366 00:42:02.154 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:42:02.154 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:02.154 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78366 00:42:02.154 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:02.154 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:02.154 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78366' 00:42:02.154 killing process with pid 78366 00:42:02.155 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 78366 00:42:02.155 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 78366 00:42:02.155 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:02.155 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:02.155 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:02.155 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:42:02.155 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:42:02.155 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:02.155 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:42:02.155 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:02.155 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:42:02.155 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:42:02.155 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:42:02.414 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:42:02.414 14:02:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:42:02.414 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:42:02.414 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:42:02.414 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:42:02.414 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:42:02.414 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:42:02.414 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:42:02.414 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:42:02.414 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:42:02.414 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:42:02.414 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:42:02.414 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:02.414 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:02.414 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:02.414 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:42:02.414 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:42:02.414 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:42:02.414 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:42:02.415 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:42:02.415 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:42:02.674 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:42:02.675 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:42:02.675 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:42:02.675 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:42:02.675 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:42:02.675 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:42:02.675 14:02:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:42:03.615 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:42:03.615 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
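The cleanup trace above tears down the kernel nvmet target through configfs in a specific order: the port-to-subsystem symlink and the namespace directory have to be removed before their parent directories, and the nvmet modules are only unloaded once the tree is empty. Below is a minimal sketch of that sequence, using the nqn.2024-02.io.spdk:cnode0 subsystem and port 1 from this run; it is not the actual nvmf/common.sh implementation, and the bare 'echo 0' in the trace does not show its redirect target, so it is left as a comment.

```bash
#!/usr/bin/env bash
# Sketch of the clean_kernel_target order seen in the trace above (assumption:
# same subsystem NQN and port number as this test run).
cfg=/sys/kernel/config/nvmet
subsys=$cfg/subsystems/nqn.2024-02.io.spdk:cnode0

[[ -e $subsys ]] || exit 0                                    # nothing to clean up
# (the trace also runs an 'echo 0' at this point; xtrace hides its redirect target)
rm -f "$cfg/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0"    # unlink the subsystem from the port first
rmdir "$subsys/namespaces/1"                                  # namespaces go before the subsystem dir
rmdir "$cfg/ports/1"                                          # the port dir, once no links remain
rmdir "$subsys"                                               # finally the subsystem itself
modprobe -r nvmet_tcp nvmet                                   # unload the kernel target modules
```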
00:42:03.615 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:42:03.615 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.1aB /tmp/spdk.key-null.xqJ /tmp/spdk.key-sha256.hyW /tmp/spdk.key-sha384.riG /tmp/spdk.key-sha512.E6K /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:42:03.615 14:03:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:42:04.185 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:42:04.185 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:42:04.185 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:42:04.185 ************************************ 00:42:04.185 END TEST nvmf_auth_host 00:42:04.185 ************************************ 00:42:04.185 00:42:04.185 real 0m37.580s 00:42:04.185 user 0m34.718s 00:42:04.185 sys 0m5.335s 00:42:04.185 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:04.185 14:03:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:04.185 14:03:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:42:04.185 14:03:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:42:04.185 14:03:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:42:04.185 14:03:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:04.185 14:03:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:42:04.185 ************************************ 00:42:04.185 START TEST nvmf_digest 00:42:04.185 ************************************ 00:42:04.185 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:42:04.446 * Looking for test storage... 
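Each test script in this log is launched through the run_test wrapper, which is what produces the START TEST / END TEST banners and the real/user/sys timing printed when nvmf_auth_host finishes above. The following is a rough, hypothetical reconstruction of what that wrapper appears to do, inferred only from the visible trace (the argument-count guard, the banners, and the timed execution); it is not the actual autotest_common.sh code.

```bash
# Hypothetical run_test sketch, reconstructed from the trace above.
run_test() {
    # the trace shows an argument-count guard of the form "'[' 3 -le 1 ']'"
    if [ "$#" -le 1 ]; then
        echo "run_test: need a test name and a command" >&2   # hypothetical error message
        return 1
    fi
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"        # runs the test command; 'time' yields the real/user/sys lines seen in the log
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}
```

Under this reading, "run_test nvmf_digest .../test/nvmf/host/digest.sh --transport=tcp" times digest.sh and brackets its output with the banners that follow in the log.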
00:42:04.446 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:42:04.446 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:04.446 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:42:04.446 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:04.446 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:04.446 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:04.446 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:04.446 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:04.446 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:42:04.446 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:42:04.446 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:42:04.446 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:42:04.446 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:42:04.446 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:42:04.446 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:42:04.446 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:04.446 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:42:04.446 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:42:04.446 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:04.446 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:04.446 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:42:04.446 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:42:04.446 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:04.446 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:42:04.446 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:42:04.446 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:42:04.446 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:42:04.446 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:04.446 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:42:04.446 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:42:04.446 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:04.446 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:04.446 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:42:04.446 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:04.446 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:04.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:04.446 --rc genhtml_branch_coverage=1 00:42:04.446 --rc genhtml_function_coverage=1 00:42:04.446 --rc genhtml_legend=1 00:42:04.446 --rc geninfo_all_blocks=1 00:42:04.446 --rc geninfo_unexecuted_blocks=1 00:42:04.446 00:42:04.446 ' 00:42:04.446 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:04.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:04.446 --rc genhtml_branch_coverage=1 00:42:04.446 --rc genhtml_function_coverage=1 00:42:04.446 --rc genhtml_legend=1 00:42:04.446 --rc geninfo_all_blocks=1 00:42:04.446 --rc geninfo_unexecuted_blocks=1 00:42:04.446 00:42:04.446 ' 00:42:04.446 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:04.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:04.446 --rc genhtml_branch_coverage=1 00:42:04.446 --rc genhtml_function_coverage=1 00:42:04.446 --rc genhtml_legend=1 00:42:04.446 --rc geninfo_all_blocks=1 00:42:04.446 --rc geninfo_unexecuted_blocks=1 00:42:04.446 00:42:04.446 ' 00:42:04.446 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:04.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:04.446 --rc genhtml_branch_coverage=1 00:42:04.446 --rc genhtml_function_coverage=1 00:42:04.446 --rc genhtml_legend=1 00:42:04.447 --rc geninfo_all_blocks=1 00:42:04.447 --rc geninfo_unexecuted_blocks=1 00:42:04.447 00:42:04.447 ' 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:04.447 14:03:01 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=105ec898-1662-46bd-85be-b241e399edb9 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:04.447 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:42:04.447 Cannot find device "nvmf_init_br" 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:42:04.447 Cannot find device "nvmf_init_br2" 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:42:04.447 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:42:04.729 Cannot find device "nvmf_tgt_br" 00:42:04.729 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:42:04.729 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:42:04.729 Cannot find device "nvmf_tgt_br2" 00:42:04.729 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:42:04.729 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:42:04.729 Cannot find device "nvmf_init_br" 00:42:04.729 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:42:04.729 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:42:04.729 Cannot find device "nvmf_init_br2" 00:42:04.729 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:42:04.729 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:42:04.729 Cannot find device "nvmf_tgt_br" 00:42:04.729 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:42:04.729 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:42:04.729 Cannot find device "nvmf_tgt_br2" 00:42:04.729 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:42:04.729 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:42:04.729 Cannot find device "nvmf_br" 00:42:04.729 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:42:04.729 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:42:04.729 Cannot find device "nvmf_init_if" 00:42:04.729 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:42:04.729 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:42:04.729 Cannot find device "nvmf_init_if2" 00:42:04.729 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:42:04.729 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:42:04.729 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:42:04.729 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:42:04.729 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:42:04.729 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:42:04.729 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:42:04.729 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:42:04.729 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:42:04.729 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:42:04.729 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:42:04.729 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:42:04.729 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:42:04.729 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:42:04.729 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:42:04.729 14:03:01 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:42:04.729 14:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:42:04.729 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:42:04.729 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:42:04.729 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:42:04.729 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:42:04.729 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:42:04.729 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:42:04.729 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:42:04.729 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:42:04.729 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:42:05.015 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:42:05.016 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:42:05.016 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.129 ms 00:42:05.016 00:42:05.016 --- 10.0.0.3 ping statistics --- 00:42:05.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:05.016 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:42:05.016 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:42:05.016 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.078 ms 00:42:05.016 00:42:05.016 --- 10.0.0.4 ping statistics --- 00:42:05.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:05.016 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:42:05.016 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:05.016 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:42:05.016 00:42:05.016 --- 10.0.0.1 ping statistics --- 00:42:05.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:05.016 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:42:05.016 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:05.016 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:42:05.016 00:42:05.016 --- 10.0.0.2 ping statistics --- 00:42:05.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:05.016 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:42:05.016 ************************************ 00:42:05.016 START TEST nvmf_digest_clean 00:42:05.016 ************************************ 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
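Editor's note: condensed, the nvmf_veth_init sequence traced above builds this topology: two initiator-side veth ends (10.0.0.1, 10.0.0.2) stay in the default namespace, the two target-side ends (10.0.0.3, 10.0.0.4) move into nvmf_tgt_ns_spdk, and the four peer interfaces are enslaved to the nvmf_br bridge, with iptables ACCEPT rules for the NVMe/TCP port; the four pings verify each leg. A minimal standalone sketch of the same setup (interface names, addresses and port exactly as logged; needs root):

    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: host-visible end <-> bridge-facing peer
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    # target-side ends live inside the namespace where nvmf_tgt will run
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    for l in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$l" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # one bridge ties the four peer ends together
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$l" master nvmf_br
    done
    # open the NVMe/TCP port on the initiator-side interfaces, allow bridge forwarding
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3    # initiator -> target leg, as in the trace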
00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=80022 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 80022 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80022 ']' 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:05.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:05.016 14:03:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:42:05.016 [2024-11-20 14:03:02.216233] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:42:05.016 [2024-11-20 14:03:02.216359] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:05.276 [2024-11-20 14:03:02.366025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:05.276 [2024-11-20 14:03:02.414995] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:05.276 [2024-11-20 14:03:02.415126] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:05.276 [2024-11-20 14:03:02.415160] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:05.276 [2024-11-20 14:03:02.415185] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:05.276 [2024-11-20 14:03:02.415200] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
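Editor's note: nvmfappstart above boils down to launching the target inside the freshly built namespace and blocking until its RPC socket answers. A hedged sketch of that shape; the real waitforlisten lives in test/common/autotest_common.sh and does more bookkeeping, so the loop below is an approximation, not the harness's actual code:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!

    # poll the UNIX-domain RPC socket until the app responds (or give up)
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1      # app exited early
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" \
                   rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }
    waitforlisten "$nvmfpid"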
00:42:05.276 [2024-11-20 14:03:02.415774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:05.845 14:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:05.845 14:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:42:05.845 14:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:05.845 14:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:05.845 14:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:42:05.845 14:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:05.845 14:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:42:05.845 14:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:42:05.845 14:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:42:05.845 14:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:05.845 14:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:42:06.104 [2024-11-20 14:03:03.172947] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:42:06.104 null0 00:42:06.104 [2024-11-20 14:03:03.221159] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:06.104 [2024-11-20 14:03:03.245228] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:42:06.104 14:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:06.104 14:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:42:06.104 14:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:42:06.104 14:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:42:06.104 14:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:42:06.104 14:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:42:06.105 14:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:42:06.105 14:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:42:06.105 14:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:42:06.105 14:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80054 00:42:06.105 14:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80054 /var/tmp/bperf.sock 00:42:06.105 14:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80054 ']' 00:42:06.105 14:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:42:06.105 14:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:06.105 14:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:06.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:06.105 14:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:06.105 14:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:42:06.105 [2024-11-20 14:03:03.290364] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:42:06.105 [2024-11-20 14:03:03.290496] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80054 ] 00:42:06.364 [2024-11-20 14:03:03.440896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:06.364 [2024-11-20 14:03:03.517946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:06.933 14:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:06.933 14:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:42:06.933 14:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:42:06.933 14:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:42:06.933 14:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:42:07.193 [2024-11-20 14:03:04.487027] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:42:07.453 14:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:42:07.453 14:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:42:07.712 nvme0n1 00:42:07.712 14:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:42:07.712 14:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:07.712 Running I/O for 2 seconds... 
00:42:10.028 17907.00 IOPS, 69.95 MiB/s [2024-11-20T14:03:07.351Z] 17399.00 IOPS, 67.96 MiB/s 00:42:10.028 Latency(us) 00:42:10.028 [2024-11-20T14:03:07.351Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:10.028 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:42:10.028 nvme0n1 : 2.01 17346.56 67.76 0.00 0.00 7374.24 6267.42 23352.57 00:42:10.028 [2024-11-20T14:03:07.351Z] =================================================================================================================== 00:42:10.028 [2024-11-20T14:03:07.351Z] Total : 17346.56 67.76 0.00 0.00 7374.24 6267.42 23352.57 00:42:10.028 { 00:42:10.028 "results": [ 00:42:10.028 { 00:42:10.028 "job": "nvme0n1", 00:42:10.028 "core_mask": "0x2", 00:42:10.028 "workload": "randread", 00:42:10.028 "status": "finished", 00:42:10.028 "queue_depth": 128, 00:42:10.028 "io_size": 4096, 00:42:10.028 "runtime": 2.013425, 00:42:10.028 "iops": 17346.561207892024, 00:42:10.028 "mibps": 67.76000471832822, 00:42:10.028 "io_failed": 0, 00:42:10.028 "io_timeout": 0, 00:42:10.028 "avg_latency_us": 7374.243576750044, 00:42:10.028 "min_latency_us": 6267.416593886463, 00:42:10.028 "max_latency_us": 23352.565938864627 00:42:10.028 } 00:42:10.028 ], 00:42:10.028 "core_count": 1 00:42:10.028 } 00:42:10.028 14:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:42:10.028 14:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:42:10.028 14:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:42:10.028 14:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:42:10.028 | select(.opcode=="crc32c") 00:42:10.028 | "\(.module_name) \(.executed)"' 00:42:10.028 14:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:42:10.028 14:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:42:10.028 14:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:42:10.028 14:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:42:10.028 14:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:42:10.028 14:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80054 00:42:10.028 14:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80054 ']' 00:42:10.028 14:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80054 00:42:10.028 14:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:42:10.028 14:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:10.028 14:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80054 00:42:10.028 14:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:42:10.028 14:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
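Editor's note: each pass repeats the same wiring over the bperf socket, traced above for the first run: finish bdevperf's deferred init, attach an NVMe-oF bdev with data digest enabled (--ddgst), drive the workload through the perform_tests helper, then read the accel framework's crc32c stats and confirm the digests were computed (in software here, since no DSA module is requested). Lifted from the trace, with paths and NQN as logged:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/bperf.sock framework_start_init
    "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests
    # the test passes if the crc32c work landed on the expected module (software)
    "$rpc" -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'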
00:42:10.028 14:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80054' 00:42:10.028 killing process with pid 80054 00:42:10.028 Received shutdown signal, test time was about 2.000000 seconds 00:42:10.028 00:42:10.028 Latency(us) 00:42:10.028 [2024-11-20T14:03:07.351Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:10.028 [2024-11-20T14:03:07.351Z] =================================================================================================================== 00:42:10.028 [2024-11-20T14:03:07.351Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:10.028 14:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80054 00:42:10.028 14:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80054 00:42:10.289 14:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:42:10.289 14:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:42:10.289 14:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:42:10.289 14:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:42:10.289 14:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:42:10.289 14:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:42:10.289 14:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:42:10.289 14:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80110 00:42:10.289 14:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:42:10.289 14:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80110 /var/tmp/bperf.sock 00:42:10.289 14:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80110 ']' 00:42:10.289 14:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:10.289 14:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:10.289 14:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:10.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:10.289 14:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:10.289 14:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:42:10.549 [2024-11-20 14:03:07.625761] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
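Editor's note: the bdevperf invocation just logged for the second pass is worth unpacking; the flag readings below are the standard bdevperf/SPDK ones and are my gloss, not text from the trace. With -o 131072 the I/O size exceeds the 65536-byte zero-copy threshold, which is why the "Zero copy mechanism will not be used" notices follow.

    # second clean-digest pass: 128 KiB random reads, queue depth 16, 2 s run
    #   -m 2               core mask 0x2 -> reactor on core 1 (nvmf_tgt owns core 0)
    #   -r .../bperf.sock  private RPC socket for this bdevperf instance
    #   -z                 stay idle until a perform_tests RPC arrives
    #   --wait-for-rpc     defer subsystem init until framework_start_init
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc &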
00:42:10.549 [2024-11-20 14:03:07.626483] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80110 ] 00:42:10.549 I/O size of 131072 is greater than zero copy threshold (65536). 00:42:10.549 Zero copy mechanism will not be used. 00:42:10.549 [2024-11-20 14:03:07.773358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:10.549 [2024-11-20 14:03:07.845181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:11.488 14:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:11.488 14:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:42:11.488 14:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:42:11.488 14:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:42:11.488 14:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:42:11.749 [2024-11-20 14:03:08.825497] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:42:11.749 14:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:42:11.749 14:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:42:12.018 nvme0n1 00:42:12.018 14:03:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:42:12.018 14:03:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:12.018 I/O size of 131072 is greater than zero copy threshold (65536). 00:42:12.018 Zero copy mechanism will not be used. 00:42:12.018 Running I/O for 2 seconds... 
00:42:14.338 7648.00 IOPS, 956.00 MiB/s [2024-11-20T14:03:11.661Z] 7552.00 IOPS, 944.00 MiB/s 00:42:14.338 Latency(us) 00:42:14.338 [2024-11-20T14:03:11.661Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:14.338 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:42:14.338 nvme0n1 : 2.00 7549.31 943.66 0.00 0.00 2116.61 1702.79 10302.60 00:42:14.338 [2024-11-20T14:03:11.661Z] =================================================================================================================== 00:42:14.338 [2024-11-20T14:03:11.661Z] Total : 7549.31 943.66 0.00 0.00 2116.61 1702.79 10302.60 00:42:14.338 { 00:42:14.338 "results": [ 00:42:14.338 { 00:42:14.338 "job": "nvme0n1", 00:42:14.338 "core_mask": "0x2", 00:42:14.338 "workload": "randread", 00:42:14.339 "status": "finished", 00:42:14.339 "queue_depth": 16, 00:42:14.339 "io_size": 131072, 00:42:14.339 "runtime": 2.002831, 00:42:14.339 "iops": 7549.313946109282, 00:42:14.339 "mibps": 943.6642432636603, 00:42:14.339 "io_failed": 0, 00:42:14.339 "io_timeout": 0, 00:42:14.339 "avg_latency_us": 2116.6068880109055, 00:42:14.339 "min_latency_us": 1702.7912663755458, 00:42:14.339 "max_latency_us": 10302.602620087337 00:42:14.339 } 00:42:14.339 ], 00:42:14.339 "core_count": 1 00:42:14.339 } 00:42:14.339 14:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:42:14.339 14:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:42:14.339 14:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:42:14.339 14:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:42:14.339 14:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:42:14.339 | select(.opcode=="crc32c") 00:42:14.339 | "\(.module_name) \(.executed)"' 00:42:14.339 14:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:42:14.339 14:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:42:14.339 14:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:42:14.339 14:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:42:14.339 14:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80110 00:42:14.339 14:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80110 ']' 00:42:14.339 14:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80110 00:42:14.339 14:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:42:14.339 14:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:14.339 14:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80110 00:42:14.339 killing process with pid 80110 00:42:14.339 Received shutdown signal, test time was about 2.000000 seconds 00:42:14.339 00:42:14.339 Latency(us) 00:42:14.339 [2024-11-20T14:03:11.662Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:42:14.339 [2024-11-20T14:03:11.662Z] =================================================================================================================== 00:42:14.339 [2024-11-20T14:03:11.662Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:14.339 14:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:42:14.339 14:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:42:14.339 14:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80110' 00:42:14.339 14:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80110 00:42:14.339 14:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80110 00:42:14.599 14:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:42:14.599 14:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:42:14.599 14:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:42:14.599 14:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:42:14.599 14:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:42:14.599 14:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:42:14.599 14:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:42:14.599 14:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80170 00:42:14.599 14:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:42:14.599 14:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80170 /var/tmp/bperf.sock 00:42:14.599 14:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80170 ']' 00:42:14.599 14:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:14.599 14:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:14.599 14:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:14.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:14.599 14:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:14.599 14:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:42:14.599 [2024-11-20 14:03:11.918983] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
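Editor's note: the teardown between passes (seen above for pids 80054 and 80110, and again later for 80170 and 80234) is the harness's killprocess helper. A hedged reconstruction of what the traced checks amount to; the real function in autotest_common.sh may differ in detail:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid"                                        # still running?
        if [[ $(uname) == Linux ]]; then
            # never kill a sudo wrapper by mistake; bdevperf shows up as reactor_1
            [[ $(ps --no-headers -o comm= "$pid") != sudo ]]
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                                   # reap it; ignore its exit code
    }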
00:42:14.599 [2024-11-20 14:03:11.919120] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80170 ] 00:42:14.858 [2024-11-20 14:03:12.070337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:14.858 [2024-11-20 14:03:12.146437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:15.797 14:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:15.797 14:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:42:15.797 14:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:42:15.797 14:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:42:15.797 14:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:42:15.797 [2024-11-20 14:03:13.055463] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:42:16.057 14:03:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:42:16.058 14:03:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:42:16.317 nvme0n1 00:42:16.317 14:03:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:42:16.317 14:03:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:16.317 Running I/O for 2 seconds... 
00:42:18.636 21718.00 IOPS, 84.84 MiB/s [2024-11-20T14:03:15.959Z] 21527.00 IOPS, 84.09 MiB/s 00:42:18.636 Latency(us) 00:42:18.636 [2024-11-20T14:03:15.959Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:18.636 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:18.636 nvme0n1 : 2.00 21502.77 84.00 0.00 0.00 5944.07 5551.96 13393.38 00:42:18.636 [2024-11-20T14:03:15.959Z] =================================================================================================================== 00:42:18.636 [2024-11-20T14:03:15.959Z] Total : 21502.77 84.00 0.00 0.00 5944.07 5551.96 13393.38 00:42:18.636 { 00:42:18.636 "results": [ 00:42:18.636 { 00:42:18.636 "job": "nvme0n1", 00:42:18.636 "core_mask": "0x2", 00:42:18.636 "workload": "randwrite", 00:42:18.636 "status": "finished", 00:42:18.636 "queue_depth": 128, 00:42:18.636 "io_size": 4096, 00:42:18.636 "runtime": 2.0023, 00:42:18.636 "iops": 21502.77181241572, 00:42:18.636 "mibps": 83.99520239224891, 00:42:18.636 "io_failed": 0, 00:42:18.636 "io_timeout": 0, 00:42:18.636 "avg_latency_us": 5944.0665677647, 00:42:18.636 "min_latency_us": 5551.95807860262, 00:42:18.636 "max_latency_us": 13393.383406113537 00:42:18.636 } 00:42:18.636 ], 00:42:18.636 "core_count": 1 00:42:18.636 } 00:42:18.636 14:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:42:18.636 14:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:42:18.636 14:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:42:18.636 14:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:42:18.636 | select(.opcode=="crc32c") 00:42:18.636 | "\(.module_name) \(.executed)"' 00:42:18.636 14:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:42:18.636 14:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:42:18.636 14:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:42:18.636 14:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:42:18.636 14:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:42:18.636 14:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80170 00:42:18.636 14:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80170 ']' 00:42:18.636 14:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80170 00:42:18.636 14:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:42:18.636 14:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:18.636 14:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80170 00:42:18.636 killing process with pid 80170 00:42:18.636 Received shutdown signal, test time was about 2.000000 seconds 00:42:18.636 00:42:18.636 Latency(us) 00:42:18.636 [2024-11-20T14:03:15.959Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:42:18.636 [2024-11-20T14:03:15.959Z] =================================================================================================================== 00:42:18.636 [2024-11-20T14:03:15.959Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:18.636 14:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:42:18.637 14:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:42:18.637 14:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80170' 00:42:18.637 14:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80170 00:42:18.637 14:03:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80170 00:42:18.897 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:42:18.897 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:42:18.897 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:42:18.897 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:42:18.897 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:42:18.897 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:42:18.897 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:42:18.897 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:42:18.897 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80234 00:42:18.897 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80234 /var/tmp/bperf.sock 00:42:18.897 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80234 ']' 00:42:18.897 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:18.897 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:18.897 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:18.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:18.897 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:18.897 14:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:42:18.897 [2024-11-20 14:03:16.121767] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:42:18.897 [2024-11-20 14:03:16.121912] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536). 00:42:18.897 Zero copy mechanism will not be used. 
00:42:18.897 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80234 ] 00:42:19.157 [2024-11-20 14:03:16.256053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:19.157 [2024-11-20 14:03:16.333957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:19.726 14:03:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:19.726 14:03:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:42:19.726 14:03:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:42:19.726 14:03:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:42:19.726 14:03:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:42:19.986 [2024-11-20 14:03:17.283472] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:42:20.246 14:03:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:42:20.246 14:03:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:42:20.506 nvme0n1 00:42:20.506 14:03:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:42:20.506 14:03:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:20.506 I/O size of 131072 is greater than zero copy threshold (65536). 00:42:20.506 Zero copy mechanism will not be used. 00:42:20.506 Running I/O for 2 seconds... 
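[editor's note] The bperf flow traced above reduces to a handful of RPCs against the bdevperf application socket. A condensed, non-authoritative sketch of what host/digest.sh drives here (paths, socket, and flags exactly as they appear in this run; backgrounding and cleanup omitted):
  SOCK=/var/tmp/bperf.sock
  SPDK=/home/vagrant/spdk_repo/spdk
  # Start bdevperf paused (-z --wait-for-rpc) so the controller can be attached before I/O begins.
  "$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc &
  # Finish subsystem init, then attach the NVMe-oF/TCP controller with data digest (--ddgst) enabled.
  "$SPDK/scripts/rpc.py" -s "$SOCK" framework_start_init
  "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Run the 2-second workload, then read back which accel module executed the crc32c operations.
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
  "$SPDK/scripts/rpc.py" -s "$SOCK" accel_get_stats | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'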
00:42:22.843 9062.00 IOPS, 1132.75 MiB/s [2024-11-20T14:03:20.166Z] 9055.50 IOPS, 1131.94 MiB/s 00:42:22.843 Latency(us) 00:42:22.843 [2024-11-20T14:03:20.166Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:22.843 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:42:22.843 nvme0n1 : 2.00 9049.01 1131.13 0.00 0.00 1764.46 1187.66 3920.71 00:42:22.843 [2024-11-20T14:03:20.166Z] =================================================================================================================== 00:42:22.843 [2024-11-20T14:03:20.166Z] Total : 9049.01 1131.13 0.00 0.00 1764.46 1187.66 3920.71 00:42:22.843 { 00:42:22.843 "results": [ 00:42:22.843 { 00:42:22.843 "job": "nvme0n1", 00:42:22.843 "core_mask": "0x2", 00:42:22.843 "workload": "randwrite", 00:42:22.843 "status": "finished", 00:42:22.843 "queue_depth": 16, 00:42:22.843 "io_size": 131072, 00:42:22.843 "runtime": 2.003424, 00:42:22.843 "iops": 9049.008098135992, 00:42:22.843 "mibps": 1131.126012266999, 00:42:22.843 "io_failed": 0, 00:42:22.843 "io_timeout": 0, 00:42:22.843 "avg_latency_us": 1764.4566946105074, 00:42:22.843 "min_latency_us": 1187.661135371179, 00:42:22.843 "max_latency_us": 3920.7126637554584 00:42:22.843 } 00:42:22.843 ], 00:42:22.843 "core_count": 1 00:42:22.843 } 00:42:22.843 14:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:42:22.843 14:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:42:22.843 14:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:42:22.843 14:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:42:22.843 14:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:42:22.843 | select(.opcode=="crc32c") 00:42:22.843 | "\(.module_name) \(.executed)"' 00:42:22.843 14:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:42:22.843 14:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:42:22.843 14:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:42:22.843 14:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:42:22.843 14:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80234 00:42:22.843 14:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80234 ']' 00:42:22.843 14:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80234 00:42:22.843 14:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:42:22.843 14:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:22.843 14:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80234 00:42:22.843 14:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:42:22.843 14:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
00:42:22.843 14:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80234' 00:42:22.843 killing process with pid 80234 00:42:22.843 14:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80234 00:42:22.843 Received shutdown signal, test time was about 2.000000 seconds 00:42:22.843 00:42:22.843 Latency(us) 00:42:22.843 [2024-11-20T14:03:20.166Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:22.843 [2024-11-20T14:03:20.166Z] =================================================================================================================== 00:42:22.843 [2024-11-20T14:03:20.166Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:22.843 14:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80234 00:42:23.103 14:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 80022 00:42:23.103 14:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80022 ']' 00:42:23.103 14:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80022 00:42:23.103 14:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:42:23.103 14:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:23.103 14:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80022 00:42:23.363 14:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:23.363 14:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:23.363 killing process with pid 80022 00:42:23.363 14:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80022' 00:42:23.363 14:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80022 00:42:23.363 14:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80022 00:42:23.623 ************************************ 00:42:23.623 END TEST nvmf_digest_clean 00:42:23.623 ************************************ 00:42:23.623 00:42:23.623 real 0m18.553s 00:42:23.623 user 0m34.854s 00:42:23.623 sys 0m5.357s 00:42:23.623 14:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:23.623 14:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:42:23.623 14:03:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:42:23.623 14:03:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:23.623 14:03:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:23.623 14:03:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:42:23.623 ************************************ 00:42:23.623 START TEST nvmf_digest_error 00:42:23.623 ************************************ 00:42:23.623 14:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:42:23.623 14:03:20 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:42:23.623 14:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:23.623 14:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:23.623 14:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:42:23.623 14:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=80320 00:42:23.623 14:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 80320 00:42:23.623 14:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:42:23.623 14:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80320 ']' 00:42:23.623 14:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:23.623 14:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:23.623 14:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:23.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:23.623 14:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:23.623 14:03:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:42:23.623 [2024-11-20 14:03:20.841840] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:42:23.623 [2024-11-20 14:03:20.841997] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:23.883 [2024-11-20 14:03:20.989491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:23.883 [2024-11-20 14:03:21.049065] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:23.883 [2024-11-20 14:03:21.049121] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:23.883 [2024-11-20 14:03:21.049127] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:23.883 [2024-11-20 14:03:21.049132] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:23.883 [2024-11-20 14:03:21.049136] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
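[editor's note] The nvmf_digest_error test traced next relies on the accel error-injection module on the target side: crc32c is reassigned to the "error" module while nvmf_tgt is still paused under --wait-for-rpc, and corruption is only switched on after the host controller is attached with --ddgst. A rough sketch of the RPCs that show up in the trace below ("rpc" stands in for the rpc_cmd helper against the target's default /var/tmp/spdk.sock; the null0 namespace and 10.0.0.3:4420 listener created by common_target_config are elided):
  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }
  # While the target is paused, route all crc32c operations through the accel "error" module.
  rpc accel_assign_opc -o crc32c -m error
  # ... common_target_config finishes init and exposes null0 over NVMe/TCP on 10.0.0.3:4420 ...
  # Keep injection disabled while the host attaches, then corrupt 256 crc32c operations,
  # which the initiator observes as data digest errors on its READ completions.
  rpc accel_error_inject_error -o crc32c -t disable
  rpc accel_error_inject_error -o crc32c -t corrupt -i 256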
00:42:23.883 [2024-11-20 14:03:21.049479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:24.453 14:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:24.453 14:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:42:24.453 14:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:24.453 14:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:24.453 14:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:42:24.453 14:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:24.453 14:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:42:24.453 14:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:24.453 14:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:42:24.453 [2024-11-20 14:03:21.760592] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:42:24.453 14:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:24.453 14:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:42:24.453 14:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:42:24.453 14:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:24.453 14:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:42:24.713 [2024-11-20 14:03:21.846651] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:42:24.713 null0 00:42:24.713 [2024-11-20 14:03:21.913479] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:24.713 [2024-11-20 14:03:21.937593] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:42:24.713 14:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:24.713 14:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:42:24.713 14:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:42:24.713 14:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:42:24.713 14:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:42:24.713 14:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:42:24.713 14:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80352 00:42:24.713 14:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:42:24.713 14:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80352 /var/tmp/bperf.sock 00:42:24.713 14:03:21 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80352 ']' 00:42:24.713 14:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:24.713 14:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:24.713 14:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:24.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:24.713 14:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:24.713 14:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:42:24.713 [2024-11-20 14:03:21.996986] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:42:24.713 [2024-11-20 14:03:21.997152] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80352 ] 00:42:24.980 [2024-11-20 14:03:22.145640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:24.980 [2024-11-20 14:03:22.218674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:24.980 [2024-11-20 14:03:22.294245] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:42:25.560 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:25.560 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:42:25.560 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:42:25.560 14:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:42:25.819 14:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:42:25.819 14:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:25.819 14:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:42:25.819 14:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:25.820 14:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:42:25.820 14:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:42:26.079 nvme0n1 00:42:26.079 14:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:42:26.079 14:03:23 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.079 14:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:42:26.079 14:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.079 14:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:42:26.079 14:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:26.339 Running I/O for 2 seconds... 00:42:26.339 [2024-11-20 14:03:23.491744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.339 [2024-11-20 14:03:23.491811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.339 [2024-11-20 14:03:23.491825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.339 [2024-11-20 14:03:23.505326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.339 [2024-11-20 14:03:23.505442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.339 [2024-11-20 14:03:23.505455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.339 [2024-11-20 14:03:23.519338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.339 [2024-11-20 14:03:23.519380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.339 [2024-11-20 14:03:23.519392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.339 [2024-11-20 14:03:23.533205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.339 [2024-11-20 14:03:23.533242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.339 [2024-11-20 14:03:23.533252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.339 [2024-11-20 14:03:23.546769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.339 [2024-11-20 14:03:23.546868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.339 [2024-11-20 14:03:23.546879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.339 [2024-11-20 14:03:23.560425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.339 [2024-11-20 14:03:23.560493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17898 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.339 [2024-11-20 14:03:23.560503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.339 [2024-11-20 14:03:23.573986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.339 [2024-11-20 14:03:23.574069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.339 [2024-11-20 14:03:23.574080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.339 [2024-11-20 14:03:23.587610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.339 [2024-11-20 14:03:23.587648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.339 [2024-11-20 14:03:23.587658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.339 [2024-11-20 14:03:23.601122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.339 [2024-11-20 14:03:23.601210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.339 [2024-11-20 14:03:23.601252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.339 [2024-11-20 14:03:23.614607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.339 [2024-11-20 14:03:23.614698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:12913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.339 [2024-11-20 14:03:23.614759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.339 [2024-11-20 14:03:23.628283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.339 [2024-11-20 14:03:23.628369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.339 [2024-11-20 14:03:23.628428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.339 [2024-11-20 14:03:23.641610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.339 [2024-11-20 14:03:23.641696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.339 [2024-11-20 14:03:23.641773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.339 [2024-11-20 14:03:23.655190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.339 [2024-11-20 14:03:23.655298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:25 nsid:1 lba:1316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.339 [2024-11-20 14:03:23.655352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.599 [2024-11-20 14:03:23.669056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.599 [2024-11-20 14:03:23.669142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.599 [2024-11-20 14:03:23.669184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.599 [2024-11-20 14:03:23.682553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.599 [2024-11-20 14:03:23.682640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:12679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.599 [2024-11-20 14:03:23.682682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.600 [2024-11-20 14:03:23.696204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.600 [2024-11-20 14:03:23.696315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.600 [2024-11-20 14:03:23.696357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.600 [2024-11-20 14:03:23.709687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.600 [2024-11-20 14:03:23.709788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.600 [2024-11-20 14:03:23.709830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.600 [2024-11-20 14:03:23.723172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.600 [2024-11-20 14:03:23.723276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:6522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.600 [2024-11-20 14:03:23.723323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.600 [2024-11-20 14:03:23.736464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.600 [2024-11-20 14:03:23.736566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.600 [2024-11-20 14:03:23.736607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.600 [2024-11-20 14:03:23.749867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.600 [2024-11-20 14:03:23.749967] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.600 [2024-11-20 14:03:23.750011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.600 [2024-11-20 14:03:23.763816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.600 [2024-11-20 14:03:23.763892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.600 [2024-11-20 14:03:23.763903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.600 [2024-11-20 14:03:23.777696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.600 [2024-11-20 14:03:23.777745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.600 [2024-11-20 14:03:23.777755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.600 [2024-11-20 14:03:23.791218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.600 [2024-11-20 14:03:23.791254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:11373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.600 [2024-11-20 14:03:23.791264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.600 [2024-11-20 14:03:23.804552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.600 [2024-11-20 14:03:23.804588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.600 [2024-11-20 14:03:23.804597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.600 [2024-11-20 14:03:23.818000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.600 [2024-11-20 14:03:23.818033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.600 [2024-11-20 14:03:23.818042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.600 [2024-11-20 14:03:23.831120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.600 [2024-11-20 14:03:23.831198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.600 [2024-11-20 14:03:23.831209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.600 [2024-11-20 14:03:23.844523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.600 
[2024-11-20 14:03:23.844557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.600 [2024-11-20 14:03:23.844567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.600 [2024-11-20 14:03:23.857975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.600 [2024-11-20 14:03:23.858052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:25361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.600 [2024-11-20 14:03:23.858062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.600 [2024-11-20 14:03:23.871445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.600 [2024-11-20 14:03:23.871482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.600 [2024-11-20 14:03:23.871491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.600 [2024-11-20 14:03:23.885428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.600 [2024-11-20 14:03:23.885546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.600 [2024-11-20 14:03:23.885560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.600 [2024-11-20 14:03:23.900447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.600 [2024-11-20 14:03:23.900552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.600 [2024-11-20 14:03:23.900565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.600 [2024-11-20 14:03:23.915622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.600 [2024-11-20 14:03:23.915724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.600 [2024-11-20 14:03:23.915737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.861 [2024-11-20 14:03:23.930035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.861 [2024-11-20 14:03:23.930069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.861 [2024-11-20 14:03:23.930077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.861 [2024-11-20 14:03:23.943847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x7b4230) 00:42:26.861 [2024-11-20 14:03:23.943881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.861 [2024-11-20 14:03:23.943891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.861 [2024-11-20 14:03:23.957245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.861 [2024-11-20 14:03:23.957279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:5464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.861 [2024-11-20 14:03:23.957289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.861 [2024-11-20 14:03:23.970744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.861 [2024-11-20 14:03:23.970777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.861 [2024-11-20 14:03:23.970787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.861 [2024-11-20 14:03:23.984326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.861 [2024-11-20 14:03:23.984361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.861 [2024-11-20 14:03:23.984370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.861 [2024-11-20 14:03:23.997735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.861 [2024-11-20 14:03:23.997767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.861 [2024-11-20 14:03:23.997775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.861 [2024-11-20 14:03:24.011129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.861 [2024-11-20 14:03:24.011162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.861 [2024-11-20 14:03:24.011187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.861 [2024-11-20 14:03:24.024824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.861 [2024-11-20 14:03:24.024857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.861 [2024-11-20 14:03:24.024866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.861 [2024-11-20 14:03:24.038640] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.861 [2024-11-20 14:03:24.038674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.861 [2024-11-20 14:03:24.038683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.861 [2024-11-20 14:03:24.052041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.861 [2024-11-20 14:03:24.052120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.862 [2024-11-20 14:03:24.052130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.862 [2024-11-20 14:03:24.065688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.862 [2024-11-20 14:03:24.065736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.862 [2024-11-20 14:03:24.065745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.862 [2024-11-20 14:03:24.078952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.862 [2024-11-20 14:03:24.079047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.862 [2024-11-20 14:03:24.079059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.862 [2024-11-20 14:03:24.092392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.862 [2024-11-20 14:03:24.092429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:20541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.862 [2024-11-20 14:03:24.092439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.862 [2024-11-20 14:03:24.106031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.862 [2024-11-20 14:03:24.106067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.862 [2024-11-20 14:03:24.106077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.862 [2024-11-20 14:03:24.119710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.862 [2024-11-20 14:03:24.119761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:18872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.862 [2024-11-20 14:03:24.119770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
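[editor's note] Each injected corruption surfaces as a pair of records in this stretch: a data digest error reported from nvme_tcp.c's crc32c completion callback, plus the affected READ printed by nvme_qpair.c with status (00/22), COMMAND TRANSIENT TRANSPORT ERROR; since bdev_nvme_set_options was given --bdev-retry-count -1 earlier in this run, such completions are presumably retried rather than failing the job outright. A quick tally from a saved console log (file name here is only an example):
  grep -c 'data digest error on tqpair' nvmf_digest_error.log
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' nvmf_digest_error.log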
00:42:26.862 [2024-11-20 14:03:24.132988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.862 [2024-11-20 14:03:24.133081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.862 [2024-11-20 14:03:24.133092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.862 [2024-11-20 14:03:24.146422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.862 [2024-11-20 14:03:24.146457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.862 [2024-11-20 14:03:24.146466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.862 [2024-11-20 14:03:24.160135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.862 [2024-11-20 14:03:24.160218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.862 [2024-11-20 14:03:24.160229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:26.862 [2024-11-20 14:03:24.174008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:26.862 [2024-11-20 14:03:24.174040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:26.862 [2024-11-20 14:03:24.174049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.122 [2024-11-20 14:03:24.187482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.122 [2024-11-20 14:03:24.187562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.122 [2024-11-20 14:03:24.187573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.122 [2024-11-20 14:03:24.201119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.122 [2024-11-20 14:03:24.201154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.122 [2024-11-20 14:03:24.201163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.122 [2024-11-20 14:03:24.214373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.122 [2024-11-20 14:03:24.214446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.122 [2024-11-20 14:03:24.214456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.122 [2024-11-20 14:03:24.228092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.122 [2024-11-20 14:03:24.228127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:2936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.122 [2024-11-20 14:03:24.228136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.122 [2024-11-20 14:03:24.241881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.122 [2024-11-20 14:03:24.241916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.122 [2024-11-20 14:03:24.241925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.122 [2024-11-20 14:03:24.255514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.122 [2024-11-20 14:03:24.255550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.122 [2024-11-20 14:03:24.255559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.122 [2024-11-20 14:03:24.269781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.122 [2024-11-20 14:03:24.269818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.122 [2024-11-20 14:03:24.269827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.122 [2024-11-20 14:03:24.284246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.122 [2024-11-20 14:03:24.284282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.122 [2024-11-20 14:03:24.284292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.123 [2024-11-20 14:03:24.298459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.123 [2024-11-20 14:03:24.298497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.123 [2024-11-20 14:03:24.298507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.123 [2024-11-20 14:03:24.312799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.123 [2024-11-20 14:03:24.312834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.123 [2024-11-20 14:03:24.312844] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.123 [2024-11-20 14:03:24.327899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.123 [2024-11-20 14:03:24.327939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:10857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.123 [2024-11-20 14:03:24.327949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.123 [2024-11-20 14:03:24.342407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.123 [2024-11-20 14:03:24.342497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.123 [2024-11-20 14:03:24.342508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.123 [2024-11-20 14:03:24.362205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.123 [2024-11-20 14:03:24.362240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.123 [2024-11-20 14:03:24.362249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.123 [2024-11-20 14:03:24.375860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.123 [2024-11-20 14:03:24.375896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.123 [2024-11-20 14:03:24.375905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.123 [2024-11-20 14:03:24.389274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.123 [2024-11-20 14:03:24.389360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.123 [2024-11-20 14:03:24.389371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.123 [2024-11-20 14:03:24.402931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.123 [2024-11-20 14:03:24.402965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.123 [2024-11-20 14:03:24.402975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.123 [2024-11-20 14:03:24.416707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.123 [2024-11-20 14:03:24.416815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:18388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
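[editor's note] The interim throughput counters bdevperf prints during these runs (the 18344.00 IOPS, 71.66 MiB/s sample just below, or 9049.01 IOPS, 1131.13 MiB/s in the earlier 128 KiB run) are simply IOPS scaled by the -o io_size of the run; a one-liner reproduces them from the figures in this log:
  awk 'BEGIN { printf "%.2f MiB/s\n", 18344.00 * 4096 / 1048576 }'    # -> 71.66 MiB/s
  awk 'BEGIN { printf "%.2f MiB/s\n", 9049.01 * 131072 / 1048576 }'   # -> 1131.13 MiB/s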
00:42:27.123 [2024-11-20 14:03:24.416826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.123 [2024-11-20 14:03:24.430522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.123 [2024-11-20 14:03:24.430559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.123 [2024-11-20 14:03:24.430569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.383 [2024-11-20 14:03:24.444284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.383 [2024-11-20 14:03:24.444388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.383 [2024-11-20 14:03:24.444400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.383 [2024-11-20 14:03:24.458213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.383 [2024-11-20 14:03:24.458251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:11991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.383 [2024-11-20 14:03:24.458261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.383 18344.00 IOPS, 71.66 MiB/s [2024-11-20T14:03:24.706Z] [2024-11-20 14:03:24.472562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.383 [2024-11-20 14:03:24.472654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.383 [2024-11-20 14:03:24.472700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.383 [2024-11-20 14:03:24.486317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.383 [2024-11-20 14:03:24.486410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:11040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.383 [2024-11-20 14:03:24.486454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.383 [2024-11-20 14:03:24.500282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.383 [2024-11-20 14:03:24.500374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.383 [2024-11-20 14:03:24.500417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.383 [2024-11-20 14:03:24.514162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.383 [2024-11-20 14:03:24.514247] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:21496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.383 [2024-11-20 14:03:24.514290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.383 [2024-11-20 14:03:24.527815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.383 [2024-11-20 14:03:24.527904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:16591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.383 [2024-11-20 14:03:24.527948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.383 [2024-11-20 14:03:24.541504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.383 [2024-11-20 14:03:24.541592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:24809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.383 [2024-11-20 14:03:24.541632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.383 [2024-11-20 14:03:24.555059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.383 [2024-11-20 14:03:24.555137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.383 [2024-11-20 14:03:24.555147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.383 [2024-11-20 14:03:24.568385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.383 [2024-11-20 14:03:24.568463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.383 [2024-11-20 14:03:24.568474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.383 [2024-11-20 14:03:24.581931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.383 [2024-11-20 14:03:24.581962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.383 [2024-11-20 14:03:24.581970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.383 [2024-11-20 14:03:24.595205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.383 [2024-11-20 14:03:24.595284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.383 [2024-11-20 14:03:24.595296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.383 [2024-11-20 14:03:24.608888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 
00:42:27.383 [2024-11-20 14:03:24.608920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:16702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.383 [2024-11-20 14:03:24.608929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.383 [2024-11-20 14:03:24.622457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.383 [2024-11-20 14:03:24.622493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:6600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.383 [2024-11-20 14:03:24.622502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.384 [2024-11-20 14:03:24.635800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.384 [2024-11-20 14:03:24.635832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.384 [2024-11-20 14:03:24.635841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.384 [2024-11-20 14:03:24.648843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.384 [2024-11-20 14:03:24.648921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.384 [2024-11-20 14:03:24.648931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.384 [2024-11-20 14:03:24.662240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.384 [2024-11-20 14:03:24.662277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.384 [2024-11-20 14:03:24.662286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.384 [2024-11-20 14:03:24.675392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.384 [2024-11-20 14:03:24.675427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:19269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.384 [2024-11-20 14:03:24.675435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.384 [2024-11-20 14:03:24.688566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.384 [2024-11-20 14:03:24.688600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.384 [2024-11-20 14:03:24.688608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.384 [2024-11-20 14:03:24.702048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.384 [2024-11-20 14:03:24.702130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.384 [2024-11-20 14:03:24.702140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.644 [2024-11-20 14:03:24.715416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.644 [2024-11-20 14:03:24.715451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.644 [2024-11-20 14:03:24.715460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.644 [2024-11-20 14:03:24.728751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.644 [2024-11-20 14:03:24.728783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:15574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.644 [2024-11-20 14:03:24.728792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.644 [2024-11-20 14:03:24.742110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.644 [2024-11-20 14:03:24.742145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.644 [2024-11-20 14:03:24.742154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.644 [2024-11-20 14:03:24.755372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.644 [2024-11-20 14:03:24.755453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.644 [2024-11-20 14:03:24.755463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.644 [2024-11-20 14:03:24.768845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.644 [2024-11-20 14:03:24.768876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.644 [2024-11-20 14:03:24.768884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.644 [2024-11-20 14:03:24.782089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.644 [2024-11-20 14:03:24.782122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.644 [2024-11-20 14:03:24.782130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.644 [2024-11-20 14:03:24.795535] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.645 [2024-11-20 14:03:24.795570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.645 [2024-11-20 14:03:24.795579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.645 [2024-11-20 14:03:24.809176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.645 [2024-11-20 14:03:24.809210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:1018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.645 [2024-11-20 14:03:24.809219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.645 [2024-11-20 14:03:24.822909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.645 [2024-11-20 14:03:24.822949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:12413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.645 [2024-11-20 14:03:24.822976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.645 [2024-11-20 14:03:24.836469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.645 [2024-11-20 14:03:24.836506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.645 [2024-11-20 14:03:24.836514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.645 [2024-11-20 14:03:24.849783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.645 [2024-11-20 14:03:24.849864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.645 [2024-11-20 14:03:24.849874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.645 [2024-11-20 14:03:24.862916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.645 [2024-11-20 14:03:24.862955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.645 [2024-11-20 14:03:24.862965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.645 [2024-11-20 14:03:24.875807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.645 [2024-11-20 14:03:24.875840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.645 [2024-11-20 14:03:24.875849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:42:27.645 [2024-11-20 14:03:24.888703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.645 [2024-11-20 14:03:24.888743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:11942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.645 [2024-11-20 14:03:24.888752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.645 [2024-11-20 14:03:24.901778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.645 [2024-11-20 14:03:24.901805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.645 [2024-11-20 14:03:24.901814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.645 [2024-11-20 14:03:24.915053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.645 [2024-11-20 14:03:24.915088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.645 [2024-11-20 14:03:24.915097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.645 [2024-11-20 14:03:24.928939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.645 [2024-11-20 14:03:24.928974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.645 [2024-11-20 14:03:24.928983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.645 [2024-11-20 14:03:24.942779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.645 [2024-11-20 14:03:24.942813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:10910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.645 [2024-11-20 14:03:24.942822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.645 [2024-11-20 14:03:24.956574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.645 [2024-11-20 14:03:24.956623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.645 [2024-11-20 14:03:24.956632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.905 [2024-11-20 14:03:24.970030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.905 [2024-11-20 14:03:24.970060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.905 [2024-11-20 14:03:24.970070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.905 [2024-11-20 14:03:24.983142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.905 [2024-11-20 14:03:24.983223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.905 [2024-11-20 14:03:24.983233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.905 [2024-11-20 14:03:24.996402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.905 [2024-11-20 14:03:24.996436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:14374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.905 [2024-11-20 14:03:24.996445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.905 [2024-11-20 14:03:25.009957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.905 [2024-11-20 14:03:25.010030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.905 [2024-11-20 14:03:25.010041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.905 [2024-11-20 14:03:25.023187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.905 [2024-11-20 14:03:25.023219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.905 [2024-11-20 14:03:25.023228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.905 [2024-11-20 14:03:25.036364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.905 [2024-11-20 14:03:25.036398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.905 [2024-11-20 14:03:25.036407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.905 [2024-11-20 14:03:25.049894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.905 [2024-11-20 14:03:25.049931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.905 [2024-11-20 14:03:25.049940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.905 [2024-11-20 14:03:25.063650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.905 [2024-11-20 14:03:25.063688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.905 [2024-11-20 14:03:25.063698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.905 [2024-11-20 14:03:25.076901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.905 [2024-11-20 14:03:25.076933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.905 [2024-11-20 14:03:25.076942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.905 [2024-11-20 14:03:25.090010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.906 [2024-11-20 14:03:25.090041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.906 [2024-11-20 14:03:25.090050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.906 [2024-11-20 14:03:25.103220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.906 [2024-11-20 14:03:25.103300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:11990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.906 [2024-11-20 14:03:25.103311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.906 [2024-11-20 14:03:25.116454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.906 [2024-11-20 14:03:25.116488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.906 [2024-11-20 14:03:25.116497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.906 [2024-11-20 14:03:25.129651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.906 [2024-11-20 14:03:25.129761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.906 [2024-11-20 14:03:25.129772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.906 [2024-11-20 14:03:25.142884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.906 [2024-11-20 14:03:25.142919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.906 [2024-11-20 14:03:25.142935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.906 [2024-11-20 14:03:25.156052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.906 [2024-11-20 14:03:25.156128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.906 [2024-11-20 14:03:25.156139] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.906 [2024-11-20 14:03:25.169377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.906 [2024-11-20 14:03:25.169410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.906 [2024-11-20 14:03:25.169418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.906 [2024-11-20 14:03:25.182398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.906 [2024-11-20 14:03:25.182429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.906 [2024-11-20 14:03:25.182438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.906 [2024-11-20 14:03:25.195428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.906 [2024-11-20 14:03:25.195461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.906 [2024-11-20 14:03:25.195470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:27.906 [2024-11-20 14:03:25.208593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:27.906 [2024-11-20 14:03:25.208627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:27.906 [2024-11-20 14:03:25.208636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.166 [2024-11-20 14:03:25.227024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:28.166 [2024-11-20 14:03:25.227056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.166 [2024-11-20 14:03:25.227065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.166 [2024-11-20 14:03:25.240053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:28.166 [2024-11-20 14:03:25.240087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.166 [2024-11-20 14:03:25.240095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.166 [2024-11-20 14:03:25.252909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:28.166 [2024-11-20 14:03:25.252988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.166 [2024-11-20 
14:03:25.252998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.166 [2024-11-20 14:03:25.265852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:28.166 [2024-11-20 14:03:25.265887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.166 [2024-11-20 14:03:25.265895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.166 [2024-11-20 14:03:25.278691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:28.166 [2024-11-20 14:03:25.278738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.166 [2024-11-20 14:03:25.278747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.166 [2024-11-20 14:03:25.291642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:28.166 [2024-11-20 14:03:25.291676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.166 [2024-11-20 14:03:25.291685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.166 [2024-11-20 14:03:25.304580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:28.166 [2024-11-20 14:03:25.304614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.166 [2024-11-20 14:03:25.304622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.166 [2024-11-20 14:03:25.317504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:28.166 [2024-11-20 14:03:25.317537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.166 [2024-11-20 14:03:25.317546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.166 [2024-11-20 14:03:25.330417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:28.166 [2024-11-20 14:03:25.330455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.166 [2024-11-20 14:03:25.330464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.166 [2024-11-20 14:03:25.343758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:28.166 [2024-11-20 14:03:25.343796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15178 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:42:28.166 [2024-11-20 14:03:25.343805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.166 [2024-11-20 14:03:25.357394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:28.166 [2024-11-20 14:03:25.357432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:18233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.166 [2024-11-20 14:03:25.357443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.166 [2024-11-20 14:03:25.371759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:28.166 [2024-11-20 14:03:25.371797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.166 [2024-11-20 14:03:25.371807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.166 [2024-11-20 14:03:25.385313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:28.166 [2024-11-20 14:03:25.385395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.166 [2024-11-20 14:03:25.385406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.166 [2024-11-20 14:03:25.398686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:28.167 [2024-11-20 14:03:25.398734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.167 [2024-11-20 14:03:25.398743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.167 [2024-11-20 14:03:25.412287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:28.167 [2024-11-20 14:03:25.412367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.167 [2024-11-20 14:03:25.412378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.167 [2024-11-20 14:03:25.425594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:28.167 [2024-11-20 14:03:25.425629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.167 [2024-11-20 14:03:25.425638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.167 [2024-11-20 14:03:25.438833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:28.167 [2024-11-20 14:03:25.438915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 
nsid:1 lba:15162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.167 [2024-11-20 14:03:25.438932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.167 [2024-11-20 14:03:25.452025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:28.167 [2024-11-20 14:03:25.452113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.167 [2024-11-20 14:03:25.452155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.167 [2024-11-20 14:03:25.465208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7b4230) 00:42:28.167 [2024-11-20 14:03:25.465299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:28.167 [2024-11-20 14:03:25.465344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:28.167 18659.50 IOPS, 72.89 MiB/s 00:42:28.167 Latency(us) 00:42:28.167 [2024-11-20T14:03:25.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:28.167 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:42:28.167 nvme0n1 : 2.01 18668.77 72.92 0.00 0.00 6851.32 6324.65 26328.87 00:42:28.167 [2024-11-20T14:03:25.490Z] =================================================================================================================== 00:42:28.167 [2024-11-20T14:03:25.490Z] Total : 18668.77 72.92 0.00 0.00 6851.32 6324.65 26328.87 00:42:28.167 { 00:42:28.167 "results": [ 00:42:28.167 { 00:42:28.167 "job": "nvme0n1", 00:42:28.167 "core_mask": "0x2", 00:42:28.167 "workload": "randread", 00:42:28.167 "status": "finished", 00:42:28.167 "queue_depth": 128, 00:42:28.167 "io_size": 4096, 00:42:28.167 "runtime": 2.005863, 00:42:28.167 "iops": 18668.772493435496, 00:42:28.167 "mibps": 72.9248925524824, 00:42:28.167 "io_failed": 0, 00:42:28.167 "io_timeout": 0, 00:42:28.167 "avg_latency_us": 6851.322603509613, 00:42:28.167 "min_latency_us": 6324.65327510917, 00:42:28.167 "max_latency_us": 26328.873362445414 00:42:28.167 } 00:42:28.167 ], 00:42:28.167 "core_count": 1 00:42:28.167 } 00:42:28.426 14:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:42:28.426 14:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:42:28.426 14:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:42:28.426 | .driver_specific 00:42:28.426 | .nvme_error 00:42:28.426 | .status_code 00:42:28.426 | .command_transient_transport_error' 00:42:28.426 14:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:42:28.426 14:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 146 > 0 )) 00:42:28.426 14:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80352 00:42:28.426 14:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@954 -- # '[' -z 80352 ']' 00:42:28.426 14:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80352 00:42:28.426 14:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:42:28.426 14:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:28.426 14:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80352 00:42:28.687 14:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:42:28.687 14:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:42:28.687 14:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80352' 00:42:28.687 killing process with pid 80352 00:42:28.687 14:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80352 00:42:28.687 Received shutdown signal, test time was about 2.000000 seconds 00:42:28.687 00:42:28.687 Latency(us) 00:42:28.687 [2024-11-20T14:03:26.010Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:28.687 [2024-11-20T14:03:26.010Z] =================================================================================================================== 00:42:28.687 [2024-11-20T14:03:26.010Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:28.687 14:03:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80352 00:42:28.946 14:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:42:28.946 14:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:42:28.946 14:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:42:28.946 14:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:42:28.946 14:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:42:28.946 14:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80407 00:42:28.946 14:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:42:28.946 14:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80407 /var/tmp/bperf.sock 00:42:28.946 14:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80407 ']' 00:42:28.946 14:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:28.946 14:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:28.946 14:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:28.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:42:28.946 14:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:28.946 14:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:42:28.946 [2024-11-20 14:03:26.090378] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:42:28.946 [2024-11-20 14:03:26.090535] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536). 00:42:28.946 Zero copy mechanism will not be used. 00:42:28.946 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80407 ] 00:42:28.946 [2024-11-20 14:03:26.238213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:29.206 [2024-11-20 14:03:26.316711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:29.206 [2024-11-20 14:03:26.392845] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:42:29.775 14:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:29.775 14:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:42:29.775 14:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:42:29.775 14:03:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:42:30.035 14:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:42:30.035 14:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:30.035 14:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:42:30.035 14:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:30.035 14:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:42:30.035 14:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:42:30.294 nvme0n1 00:42:30.294 14:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:42:30.294 14:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:30.294 14:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:42:30.294 14:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:30.294 14:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:42:30.294 14:03:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:30.294 I/O size of 131072 is greater than zero copy threshold (65536). 00:42:30.294 Zero copy mechanism will not be used. 00:42:30.294 Running I/O for 2 seconds... 00:42:30.294 [2024-11-20 14:03:27.589299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.294 [2024-11-20 14:03:27.589916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.294 [2024-11-20 14:03:27.590001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:30.294 [2024-11-20 14:03:27.594238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.294 [2024-11-20 14:03:27.594403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.294 [2024-11-20 14:03:27.594476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:30.294 [2024-11-20 14:03:27.598538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.294 [2024-11-20 14:03:27.598696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.294 [2024-11-20 14:03:27.598898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:30.294 [2024-11-20 14:03:27.603040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.294 [2024-11-20 14:03:27.603194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.294 [2024-11-20 14:03:27.603347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:30.294 [2024-11-20 14:03:27.607763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.294 [2024-11-20 14:03:27.607943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.294 [2024-11-20 14:03:27.608084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:30.294 [2024-11-20 14:03:27.612294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.294 [2024-11-20 14:03:27.612454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.294 [2024-11-20 14:03:27.612590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:30.555 [2024-11-20 14:03:27.616956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.555 [2024-11-20 14:03:27.617125] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.555 [2024-11-20 14:03:27.617258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:30.555 [2024-11-20 14:03:27.621445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.555 [2024-11-20 14:03:27.621593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.555 [2024-11-20 14:03:27.621719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:30.555 [2024-11-20 14:03:27.625831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.555 [2024-11-20 14:03:27.625981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.555 [2024-11-20 14:03:27.626108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:30.555 [2024-11-20 14:03:27.630309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.555 [2024-11-20 14:03:27.630463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.555 [2024-11-20 14:03:27.630567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:30.555 [2024-11-20 14:03:27.634761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.555 [2024-11-20 14:03:27.634923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.555 [2024-11-20 14:03:27.635088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:30.555 [2024-11-20 14:03:27.639235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.555 [2024-11-20 14:03:27.639401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.555 [2024-11-20 14:03:27.639533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:30.555 [2024-11-20 14:03:27.643695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.555 [2024-11-20 14:03:27.643867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.555 [2024-11-20 14:03:27.644006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:30.555 [2024-11-20 14:03:27.648251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.555 
[2024-11-20 14:03:27.648403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.555 [2024-11-20 14:03:27.648535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:30.555 [2024-11-20 14:03:27.652729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.555 [2024-11-20 14:03:27.652888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.555 [2024-11-20 14:03:27.652987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:30.555 [2024-11-20 14:03:27.657087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.555 [2024-11-20 14:03:27.657260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.555 [2024-11-20 14:03:27.657419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:30.555 [2024-11-20 14:03:27.661608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.555 [2024-11-20 14:03:27.661770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.555 [2024-11-20 14:03:27.661967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:30.556 [2024-11-20 14:03:27.666028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.556 [2024-11-20 14:03:27.666180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.556 [2024-11-20 14:03:27.666299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:30.556 [2024-11-20 14:03:27.670476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.556 [2024-11-20 14:03:27.670613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.556 [2024-11-20 14:03:27.670741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:30.556 [2024-11-20 14:03:27.674923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.556 [2024-11-20 14:03:27.675123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.556 [2024-11-20 14:03:27.675301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:30.556 [2024-11-20 14:03:27.679511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xb2d400) 00:42:30.556 [2024-11-20 14:03:27.679700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.556 [2024-11-20 14:03:27.679859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:30.556 [2024-11-20 14:03:27.684018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.556 [2024-11-20 14:03:27.684184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.556 [2024-11-20 14:03:27.684311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:30.556 [2024-11-20 14:03:27.688516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.556 [2024-11-20 14:03:27.688668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.556 [2024-11-20 14:03:27.688843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:30.556 [2024-11-20 14:03:27.692942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.556 [2024-11-20 14:03:27.693105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.556 [2024-11-20 14:03:27.693228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:30.556 [2024-11-20 14:03:27.697314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.556 [2024-11-20 14:03:27.697460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.556 [2024-11-20 14:03:27.697580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:30.556 [2024-11-20 14:03:27.701736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.556 [2024-11-20 14:03:27.701894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.556 [2024-11-20 14:03:27.702043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:30.556 [2024-11-20 14:03:27.706228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.556 [2024-11-20 14:03:27.706376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.556 [2024-11-20 14:03:27.706493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:30.556 [2024-11-20 14:03:27.710668] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.556 [2024-11-20 14:03:27.710842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.556 [2024-11-20 14:03:27.710976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:30.556 [2024-11-20 14:03:27.715117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.556 [2024-11-20 14:03:27.715278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.556 [2024-11-20 14:03:27.715394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:30.556 [2024-11-20 14:03:27.719566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.556 [2024-11-20 14:03:27.719766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.556 [2024-11-20 14:03:27.719964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:30.556 [2024-11-20 14:03:27.724104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.556 [2024-11-20 14:03:27.724202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.556 [2024-11-20 14:03:27.724386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:30.556 [2024-11-20 14:03:27.728506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.556 [2024-11-20 14:03:27.728587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.556 [2024-11-20 14:03:27.728597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:30.556 [2024-11-20 14:03:27.732628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.556 [2024-11-20 14:03:27.732735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.556 [2024-11-20 14:03:27.732778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:30.556 [2024-11-20 14:03:27.736851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.556 [2024-11-20 14:03:27.736941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.556 [2024-11-20 14:03:27.736981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:42:30.556 [2024-11-20 14:03:27.741106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.556 [2024-11-20 14:03:27.741196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.556 [2024-11-20 14:03:27.741236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:30.556 [2024-11-20 14:03:27.745386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.556 [2024-11-20 14:03:27.745472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.556 [2024-11-20 14:03:27.745526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:30.556 [2024-11-20 14:03:27.749676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.556 [2024-11-20 14:03:27.749782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.556 [2024-11-20 14:03:27.749824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:30.556 [2024-11-20 14:03:27.753943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.556 [2024-11-20 14:03:27.754048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.556 [2024-11-20 14:03:27.754092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:30.556 [2024-11-20 14:03:27.758262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.556 [2024-11-20 14:03:27.758346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.556 [2024-11-20 14:03:27.758386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:30.556 [2024-11-20 14:03:27.762598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.556 [2024-11-20 14:03:27.762684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.556 [2024-11-20 14:03:27.762749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:30.556 [2024-11-20 14:03:27.767013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.556 [2024-11-20 14:03:27.767105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.556 [2024-11-20 14:03:27.767177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:30.556 [2024-11-20 14:03:27.771489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.556 [2024-11-20 14:03:27.771591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.556 [2024-11-20 14:03:27.771663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:30.556 [2024-11-20 14:03:27.775900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.556 [2024-11-20 14:03:27.775992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.556 [2024-11-20 14:03:27.776059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:30.556 [2024-11-20 14:03:27.780118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.556 [2024-11-20 14:03:27.780245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.556 [2024-11-20 14:03:27.780286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:30.557 [2024-11-20 14:03:27.784374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.557 [2024-11-20 14:03:27.784461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.557 [2024-11-20 14:03:27.784517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:30.557 [2024-11-20 14:03:27.788606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.557 [2024-11-20 14:03:27.788693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.557 [2024-11-20 14:03:27.788769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:30.557 [2024-11-20 14:03:27.792898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.557 [2024-11-20 14:03:27.792986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.557 [2024-11-20 14:03:27.793025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:30.557 [2024-11-20 14:03:27.797042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.557 [2024-11-20 14:03:27.797142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.557 [2024-11-20 14:03:27.797183] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:30.557 [2024-11-20 14:03:27.801356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.557 [2024-11-20 14:03:27.801462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.557 [2024-11-20 14:03:27.801509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:30.557 [2024-11-20 14:03:27.805552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.557 [2024-11-20 14:03:27.805639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.557 [2024-11-20 14:03:27.805684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:30.557 [2024-11-20 14:03:27.809912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.557 [2024-11-20 14:03:27.809951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.557 [2024-11-20 14:03:27.809959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:30.557 [2024-11-20 14:03:27.814087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.557 [2024-11-20 14:03:27.814121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.557 [2024-11-20 14:03:27.814129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:30.557 [2024-11-20 14:03:27.818281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.557 [2024-11-20 14:03:27.818316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.557 [2024-11-20 14:03:27.818325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:30.557 [2024-11-20 14:03:27.822429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.557 [2024-11-20 14:03:27.822464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.557 [2024-11-20 14:03:27.822473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:30.557 [2024-11-20 14:03:27.826556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.557 [2024-11-20 14:03:27.826593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.557 [2024-11-20 14:03:27.826602] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:30.557 [2024-11-20 14:03:27.830587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.557 [2024-11-20 14:03:27.830628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.557 [2024-11-20 14:03:27.830636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:30.557 [2024-11-20 14:03:27.834665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.557 [2024-11-20 14:03:27.834699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.557 [2024-11-20 14:03:27.834736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:30.557 [2024-11-20 14:03:27.838768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.557 [2024-11-20 14:03:27.838799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.557 [2024-11-20 14:03:27.838807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:30.557 [2024-11-20 14:03:27.842849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.557 [2024-11-20 14:03:27.842884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.557 [2024-11-20 14:03:27.842893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:30.557 [2024-11-20 14:03:27.846878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.557 [2024-11-20 14:03:27.846912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.557 [2024-11-20 14:03:27.846921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:30.557 [2024-11-20 14:03:27.850948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.557 [2024-11-20 14:03:27.851004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.557 [2024-11-20 14:03:27.851012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:30.557 [2024-11-20 14:03:27.855091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.557 [2024-11-20 14:03:27.855126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:42:30.557 [2024-11-20 14:03:27.855135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:30.557 [2024-11-20 14:03:27.859149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.557 [2024-11-20 14:03:27.859186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.557 [2024-11-20 14:03:27.859195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:30.557 [2024-11-20 14:03:27.863281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.557 [2024-11-20 14:03:27.863318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.557 [2024-11-20 14:03:27.863327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:30.557 [2024-11-20 14:03:27.867480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.557 [2024-11-20 14:03:27.867563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.557 [2024-11-20 14:03:27.867572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:30.557 [2024-11-20 14:03:27.871701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.557 [2024-11-20 14:03:27.871752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.557 [2024-11-20 14:03:27.871778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:30.817 [2024-11-20 14:03:27.875865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.817 [2024-11-20 14:03:27.875903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.817 [2024-11-20 14:03:27.875912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:30.817 [2024-11-20 14:03:27.880012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.817 [2024-11-20 14:03:27.880050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.817 [2024-11-20 14:03:27.880059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:30.817 [2024-11-20 14:03:27.884115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.817 [2024-11-20 14:03:27.884152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12192 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.817 [2024-11-20 14:03:27.884161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:30.817 [2024-11-20 14:03:27.888197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.817 [2024-11-20 14:03:27.888233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.817 [2024-11-20 14:03:27.888242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:30.817 [2024-11-20 14:03:27.892356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.817 [2024-11-20 14:03:27.892393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.817 [2024-11-20 14:03:27.892402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:30.817 [2024-11-20 14:03:27.896534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.817 [2024-11-20 14:03:27.896570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.817 [2024-11-20 14:03:27.896579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:30.817 [2024-11-20 14:03:27.900742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.817 [2024-11-20 14:03:27.900775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.817 [2024-11-20 14:03:27.900784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:30.817 [2024-11-20 14:03:27.904758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.817 [2024-11-20 14:03:27.904791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.817 [2024-11-20 14:03:27.904799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:30.817 [2024-11-20 14:03:27.908861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.817 [2024-11-20 14:03:27.908897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.817 [2024-11-20 14:03:27.908907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:30.817 [2024-11-20 14:03:27.913066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.817 [2024-11-20 14:03:27.913103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:11 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.817 [2024-11-20 14:03:27.913112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:30.817 [2024-11-20 14:03:27.917245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.817 [2024-11-20 14:03:27.917282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.817 [2024-11-20 14:03:27.917291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:30.817 [2024-11-20 14:03:27.921517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.817 [2024-11-20 14:03:27.921553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.817 [2024-11-20 14:03:27.921561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:30.817 [2024-11-20 14:03:27.925690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.817 [2024-11-20 14:03:27.925785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.817 [2024-11-20 14:03:27.925794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:30.817 [2024-11-20 14:03:27.930051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.817 [2024-11-20 14:03:27.930090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.817 [2024-11-20 14:03:27.930113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:30.817 [2024-11-20 14:03:27.934224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.817 [2024-11-20 14:03:27.934260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.817 [2024-11-20 14:03:27.934269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:30.817 [2024-11-20 14:03:27.938278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.817 [2024-11-20 14:03:27.938314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.817 [2024-11-20 14:03:27.938323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:30.817 [2024-11-20 14:03:27.942445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.817 [2024-11-20 14:03:27.942482] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.817 [2024-11-20 14:03:27.942491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:30.818 [2024-11-20 14:03:27.946575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.818 [2024-11-20 14:03:27.946609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.818 [2024-11-20 14:03:27.946617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:30.818 [2024-11-20 14:03:27.950687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.818 [2024-11-20 14:03:27.950732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.818 [2024-11-20 14:03:27.950742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:30.818 [2024-11-20 14:03:27.954749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.818 [2024-11-20 14:03:27.954783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.818 [2024-11-20 14:03:27.954792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:30.818 [2024-11-20 14:03:27.958860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.818 [2024-11-20 14:03:27.958895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.818 [2024-11-20 14:03:27.958904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:30.818 [2024-11-20 14:03:27.963031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.818 [2024-11-20 14:03:27.963068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.818 [2024-11-20 14:03:27.963077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:30.818 [2024-11-20 14:03:27.967180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.818 [2024-11-20 14:03:27.967224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.818 [2024-11-20 14:03:27.967233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:30.818 [2024-11-20 14:03:27.971362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.818 
[2024-11-20 14:03:27.971399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.818 [2024-11-20 14:03:27.971408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:30.818 [2024-11-20 14:03:27.975500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.818 [2024-11-20 14:03:27.975537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.818 [2024-11-20 14:03:27.975546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:30.818 [2024-11-20 14:03:27.979617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.818 [2024-11-20 14:03:27.979654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.818 [2024-11-20 14:03:27.979662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:30.818 [2024-11-20 14:03:27.983679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.818 [2024-11-20 14:03:27.983733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.818 [2024-11-20 14:03:27.983743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:30.818 [2024-11-20 14:03:27.987701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.818 [2024-11-20 14:03:27.987748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.818 [2024-11-20 14:03:27.987757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:30.818 [2024-11-20 14:03:27.991738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.818 [2024-11-20 14:03:27.991772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.818 [2024-11-20 14:03:27.991782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:30.818 [2024-11-20 14:03:27.995771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.818 [2024-11-20 14:03:27.995806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.818 [2024-11-20 14:03:27.995815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:30.818 [2024-11-20 14:03:27.999842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xb2d400) 00:42:30.818 [2024-11-20 14:03:27.999879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.818 [2024-11-20 14:03:27.999889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:30.818 [2024-11-20 14:03:28.003831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.818 [2024-11-20 14:03:28.003868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.818 [2024-11-20 14:03:28.003877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:30.818 [2024-11-20 14:03:28.007986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.818 [2024-11-20 14:03:28.008024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.818 [2024-11-20 14:03:28.008033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:30.818 [2024-11-20 14:03:28.012248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.818 [2024-11-20 14:03:28.012285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.818 [2024-11-20 14:03:28.012294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:30.818 [2024-11-20 14:03:28.016613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.818 [2024-11-20 14:03:28.016652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.818 [2024-11-20 14:03:28.016661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:30.818 [2024-11-20 14:03:28.021006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.818 [2024-11-20 14:03:28.021044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.818 [2024-11-20 14:03:28.021054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:30.818 [2024-11-20 14:03:28.025346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.818 [2024-11-20 14:03:28.025384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.818 [2024-11-20 14:03:28.025393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:30.818 [2024-11-20 14:03:28.029652] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.818 [2024-11-20 14:03:28.029690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.818 [2024-11-20 14:03:28.029699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:30.818 [2024-11-20 14:03:28.033854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.818 [2024-11-20 14:03:28.033893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.818 [2024-11-20 14:03:28.033903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:30.818 [2024-11-20 14:03:28.038100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.818 [2024-11-20 14:03:28.038136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.818 [2024-11-20 14:03:28.038145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:30.818 [2024-11-20 14:03:28.042362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.818 [2024-11-20 14:03:28.042396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.818 [2024-11-20 14:03:28.042405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:30.818 [2024-11-20 14:03:28.046606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.818 [2024-11-20 14:03:28.046640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.818 [2024-11-20 14:03:28.046649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:30.818 [2024-11-20 14:03:28.050771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.818 [2024-11-20 14:03:28.050805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.818 [2024-11-20 14:03:28.050813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:30.818 [2024-11-20 14:03:28.054892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.818 [2024-11-20 14:03:28.054932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.818 [2024-11-20 14:03:28.054942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:42:30.819 [2024-11-20 14:03:28.058911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.819 [2024-11-20 14:03:28.058952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.819 [2024-11-20 14:03:28.058977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:30.819 [2024-11-20 14:03:28.063093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.819 [2024-11-20 14:03:28.063131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.819 [2024-11-20 14:03:28.063139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:30.819 [2024-11-20 14:03:28.067131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.819 [2024-11-20 14:03:28.067168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.819 [2024-11-20 14:03:28.067177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:30.819 [2024-11-20 14:03:28.071269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.819 [2024-11-20 14:03:28.071307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.819 [2024-11-20 14:03:28.071317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:30.819 [2024-11-20 14:03:28.075315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.819 [2024-11-20 14:03:28.075352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.819 [2024-11-20 14:03:28.075361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:30.819 [2024-11-20 14:03:28.079424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.819 [2024-11-20 14:03:28.079515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.819 [2024-11-20 14:03:28.079526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:30.819 [2024-11-20 14:03:28.083591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.819 [2024-11-20 14:03:28.083628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.819 [2024-11-20 14:03:28.083638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:30.819 [2024-11-20 14:03:28.087703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.819 [2024-11-20 14:03:28.087750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.819 [2024-11-20 14:03:28.087760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:30.819 [2024-11-20 14:03:28.091798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.819 [2024-11-20 14:03:28.091833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.819 [2024-11-20 14:03:28.091842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:30.819 [2024-11-20 14:03:28.095778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.819 [2024-11-20 14:03:28.095813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.819 [2024-11-20 14:03:28.095821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:30.819 [2024-11-20 14:03:28.099755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.819 [2024-11-20 14:03:28.099791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.819 [2024-11-20 14:03:28.099800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:30.819 [2024-11-20 14:03:28.103831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.819 [2024-11-20 14:03:28.103869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.819 [2024-11-20 14:03:28.103878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:30.819 [2024-11-20 14:03:28.107888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.819 [2024-11-20 14:03:28.107926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.819 [2024-11-20 14:03:28.107934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:30.819 [2024-11-20 14:03:28.111913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.819 [2024-11-20 14:03:28.111950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.819 [2024-11-20 14:03:28.111959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:30.819 [2024-11-20 14:03:28.115912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.819 [2024-11-20 14:03:28.115950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.819 [2024-11-20 14:03:28.115959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:30.819 [2024-11-20 14:03:28.119953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.819 [2024-11-20 14:03:28.119992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.819 [2024-11-20 14:03:28.120002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:30.819 [2024-11-20 14:03:28.124110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.819 [2024-11-20 14:03:28.124148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.819 [2024-11-20 14:03:28.124157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:30.819 [2024-11-20 14:03:28.128159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.819 [2024-11-20 14:03:28.128197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.819 [2024-11-20 14:03:28.128206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:30.819 [2024-11-20 14:03:28.132107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.819 [2024-11-20 14:03:28.132143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.819 [2024-11-20 14:03:28.132153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:30.819 [2024-11-20 14:03:28.136279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:30.819 [2024-11-20 14:03:28.136316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:30.819 [2024-11-20 14:03:28.136325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.080 [2024-11-20 14:03:28.140493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.080 [2024-11-20 14:03:28.140530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.080 [2024-11-20 14:03:28.140538] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.080 [2024-11-20 14:03:28.144605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.080 [2024-11-20 14:03:28.144640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.080 [2024-11-20 14:03:28.144649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.080 [2024-11-20 14:03:28.148739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.080 [2024-11-20 14:03:28.148772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.080 [2024-11-20 14:03:28.148781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.080 [2024-11-20 14:03:28.152728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.080 [2024-11-20 14:03:28.152762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.080 [2024-11-20 14:03:28.152770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.080 [2024-11-20 14:03:28.156871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.080 [2024-11-20 14:03:28.156923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.080 [2024-11-20 14:03:28.156932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.080 [2024-11-20 14:03:28.160982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.080 [2024-11-20 14:03:28.161018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.080 [2024-11-20 14:03:28.161027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.080 [2024-11-20 14:03:28.165085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.080 [2024-11-20 14:03:28.165120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.080 [2024-11-20 14:03:28.165129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.080 [2024-11-20 14:03:28.169207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.080 [2024-11-20 14:03:28.169243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.080 
[2024-11-20 14:03:28.169251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.080 [2024-11-20 14:03:28.173317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.080 [2024-11-20 14:03:28.173352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.080 [2024-11-20 14:03:28.173361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.080 [2024-11-20 14:03:28.177514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.081 [2024-11-20 14:03:28.177613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.081 [2024-11-20 14:03:28.177624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.081 [2024-11-20 14:03:28.181660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.081 [2024-11-20 14:03:28.181697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.081 [2024-11-20 14:03:28.181737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.081 [2024-11-20 14:03:28.185800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.081 [2024-11-20 14:03:28.185833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.081 [2024-11-20 14:03:28.185842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.081 [2024-11-20 14:03:28.189865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.081 [2024-11-20 14:03:28.189900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.081 [2024-11-20 14:03:28.189909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.081 [2024-11-20 14:03:28.193826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.081 [2024-11-20 14:03:28.193861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.081 [2024-11-20 14:03:28.193869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.081 [2024-11-20 14:03:28.197725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.081 [2024-11-20 14:03:28.197770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.081 [2024-11-20 14:03:28.197778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.081 [2024-11-20 14:03:28.201686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.081 [2024-11-20 14:03:28.201735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.081 [2024-11-20 14:03:28.201761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.081 [2024-11-20 14:03:28.205746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.081 [2024-11-20 14:03:28.205778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.081 [2024-11-20 14:03:28.205786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.081 [2024-11-20 14:03:28.209730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.081 [2024-11-20 14:03:28.209762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.081 [2024-11-20 14:03:28.209770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.081 [2024-11-20 14:03:28.213781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.081 [2024-11-20 14:03:28.213814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.081 [2024-11-20 14:03:28.213823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.081 [2024-11-20 14:03:28.217829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.081 [2024-11-20 14:03:28.217865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.081 [2024-11-20 14:03:28.217873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.081 [2024-11-20 14:03:28.221838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.081 [2024-11-20 14:03:28.221870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.081 [2024-11-20 14:03:28.221878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.081 [2024-11-20 14:03:28.225926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.081 [2024-11-20 14:03:28.225961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:7 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.081 [2024-11-20 14:03:28.225970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.081 [2024-11-20 14:03:28.229982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.081 [2024-11-20 14:03:28.230018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.081 [2024-11-20 14:03:28.230026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.081 [2024-11-20 14:03:28.234062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.081 [2024-11-20 14:03:28.234099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.081 [2024-11-20 14:03:28.234108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.081 [2024-11-20 14:03:28.238068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.081 [2024-11-20 14:03:28.238103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.081 [2024-11-20 14:03:28.238112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.081 [2024-11-20 14:03:28.242193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.081 [2024-11-20 14:03:28.242230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.081 [2024-11-20 14:03:28.242239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.081 [2024-11-20 14:03:28.246274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.081 [2024-11-20 14:03:28.246309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.081 [2024-11-20 14:03:28.246317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.081 [2024-11-20 14:03:28.250360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.081 [2024-11-20 14:03:28.250396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.081 [2024-11-20 14:03:28.250406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.081 [2024-11-20 14:03:28.254456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.081 [2024-11-20 14:03:28.254489] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.081 [2024-11-20 14:03:28.254498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.081 [2024-11-20 14:03:28.258527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.081 [2024-11-20 14:03:28.258561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.081 [2024-11-20 14:03:28.258570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.081 [2024-11-20 14:03:28.262581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.081 [2024-11-20 14:03:28.262614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.081 [2024-11-20 14:03:28.262623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.081 [2024-11-20 14:03:28.266699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.082 [2024-11-20 14:03:28.266742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.082 [2024-11-20 14:03:28.266751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.082 [2024-11-20 14:03:28.270887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.082 [2024-11-20 14:03:28.270922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.082 [2024-11-20 14:03:28.270938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.082 [2024-11-20 14:03:28.275009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.082 [2024-11-20 14:03:28.275045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.082 [2024-11-20 14:03:28.275054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.082 [2024-11-20 14:03:28.279104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.082 [2024-11-20 14:03:28.279140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.082 [2024-11-20 14:03:28.279150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.082 [2024-11-20 14:03:28.283149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.082 
[2024-11-20 14:03:28.283186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.082 [2024-11-20 14:03:28.283195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.082 [2024-11-20 14:03:28.287116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.082 [2024-11-20 14:03:28.287152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.082 [2024-11-20 14:03:28.287161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.082 [2024-11-20 14:03:28.291189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.082 [2024-11-20 14:03:28.291226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.082 [2024-11-20 14:03:28.291235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.082 [2024-11-20 14:03:28.295320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.082 [2024-11-20 14:03:28.295358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.082 [2024-11-20 14:03:28.295366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.082 [2024-11-20 14:03:28.299374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.082 [2024-11-20 14:03:28.299457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.082 [2024-11-20 14:03:28.299466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.082 [2024-11-20 14:03:28.303551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.082 [2024-11-20 14:03:28.303591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.082 [2024-11-20 14:03:28.303600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.082 [2024-11-20 14:03:28.307621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.082 [2024-11-20 14:03:28.307661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.082 [2024-11-20 14:03:28.307670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.082 [2024-11-20 14:03:28.311585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0xb2d400) 00:42:31.082 [2024-11-20 14:03:28.311625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.082 [2024-11-20 14:03:28.311634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.082 [2024-11-20 14:03:28.315591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.082 [2024-11-20 14:03:28.315628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.082 [2024-11-20 14:03:28.315638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.082 [2024-11-20 14:03:28.319780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.082 [2024-11-20 14:03:28.319828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.082 [2024-11-20 14:03:28.319836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.082 [2024-11-20 14:03:28.323815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.082 [2024-11-20 14:03:28.323850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.082 [2024-11-20 14:03:28.323860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.082 [2024-11-20 14:03:28.327826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.082 [2024-11-20 14:03:28.327862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.082 [2024-11-20 14:03:28.327870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.082 [2024-11-20 14:03:28.331781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.082 [2024-11-20 14:03:28.331816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.082 [2024-11-20 14:03:28.331824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.082 [2024-11-20 14:03:28.335740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.082 [2024-11-20 14:03:28.335775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.082 [2024-11-20 14:03:28.335784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.082 [2024-11-20 14:03:28.339752] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.082 [2024-11-20 14:03:28.339786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.082 [2024-11-20 14:03:28.339795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.082 [2024-11-20 14:03:28.343869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.082 [2024-11-20 14:03:28.343934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.082 [2024-11-20 14:03:28.343946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.082 [2024-11-20 14:03:28.347996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.082 [2024-11-20 14:03:28.348033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.082 [2024-11-20 14:03:28.348042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.082 [2024-11-20 14:03:28.352010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.082 [2024-11-20 14:03:28.352047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.082 [2024-11-20 14:03:28.352056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.082 [2024-11-20 14:03:28.355989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.082 [2024-11-20 14:03:28.356026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.082 [2024-11-20 14:03:28.356034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.082 [2024-11-20 14:03:28.360079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.082 [2024-11-20 14:03:28.360116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.082 [2024-11-20 14:03:28.360126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.082 [2024-11-20 14:03:28.364154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.082 [2024-11-20 14:03:28.364191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.082 [2024-11-20 14:03:28.364199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:42:31.082 [2024-11-20 14:03:28.368160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.083 [2024-11-20 14:03:28.368210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.083 [2024-11-20 14:03:28.368219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.083 [2024-11-20 14:03:28.372241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.083 [2024-11-20 14:03:28.372277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.083 [2024-11-20 14:03:28.372285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.083 [2024-11-20 14:03:28.376329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.083 [2024-11-20 14:03:28.376367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.083 [2024-11-20 14:03:28.376375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.083 [2024-11-20 14:03:28.380458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.083 [2024-11-20 14:03:28.380563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.083 [2024-11-20 14:03:28.380607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.083 [2024-11-20 14:03:28.384666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.083 [2024-11-20 14:03:28.384789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.083 [2024-11-20 14:03:28.384833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.083 [2024-11-20 14:03:28.388896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.083 [2024-11-20 14:03:28.389001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.083 [2024-11-20 14:03:28.389041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.083 [2024-11-20 14:03:28.393251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.083 [2024-11-20 14:03:28.393287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.083 [2024-11-20 14:03:28.393296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.083 [2024-11-20 14:03:28.397385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.083 [2024-11-20 14:03:28.397422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.083 [2024-11-20 14:03:28.397430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.345 [2024-11-20 14:03:28.401496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.345 [2024-11-20 14:03:28.401532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.345 [2024-11-20 14:03:28.401541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.345 [2024-11-20 14:03:28.405578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.345 [2024-11-20 14:03:28.405657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.345 [2024-11-20 14:03:28.405683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.345 [2024-11-20 14:03:28.409677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.345 [2024-11-20 14:03:28.409786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.345 [2024-11-20 14:03:28.409797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.345 [2024-11-20 14:03:28.413740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.345 [2024-11-20 14:03:28.413774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.345 [2024-11-20 14:03:28.413783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.345 [2024-11-20 14:03:28.417835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.345 [2024-11-20 14:03:28.417871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.345 [2024-11-20 14:03:28.417879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.345 [2024-11-20 14:03:28.421887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.345 [2024-11-20 14:03:28.421922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.345 [2024-11-20 14:03:28.421931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.345 [2024-11-20 14:03:28.425951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.345 [2024-11-20 14:03:28.425988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.345 [2024-11-20 14:03:28.425996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.345 [2024-11-20 14:03:28.430076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.345 [2024-11-20 14:03:28.430114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.345 [2024-11-20 14:03:28.430123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.345 [2024-11-20 14:03:28.434164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.345 [2024-11-20 14:03:28.434201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.345 [2024-11-20 14:03:28.434210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.345 [2024-11-20 14:03:28.438196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.345 [2024-11-20 14:03:28.438235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.345 [2024-11-20 14:03:28.438244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.345 [2024-11-20 14:03:28.442307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.345 [2024-11-20 14:03:28.442343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.345 [2024-11-20 14:03:28.442352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.345 [2024-11-20 14:03:28.446435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.345 [2024-11-20 14:03:28.446470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.345 [2024-11-20 14:03:28.446479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.345 [2024-11-20 14:03:28.450543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.345 [2024-11-20 14:03:28.450579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.345 [2024-11-20 14:03:28.450588] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.345 [2024-11-20 14:03:28.454891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.345 [2024-11-20 14:03:28.454950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.345 [2024-11-20 14:03:28.454960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.345 [2024-11-20 14:03:28.459314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.345 [2024-11-20 14:03:28.459353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.345 [2024-11-20 14:03:28.459364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.345 [2024-11-20 14:03:28.463547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.345 [2024-11-20 14:03:28.463588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.345 [2024-11-20 14:03:28.463597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.345 [2024-11-20 14:03:28.467846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.345 [2024-11-20 14:03:28.467884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.345 [2024-11-20 14:03:28.467893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.345 [2024-11-20 14:03:28.472063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.345 [2024-11-20 14:03:28.472101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.345 [2024-11-20 14:03:28.472110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.345 [2024-11-20 14:03:28.476254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.345 [2024-11-20 14:03:28.476291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.345 [2024-11-20 14:03:28.476300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.345 [2024-11-20 14:03:28.480401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.345 [2024-11-20 14:03:28.480438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:42:31.345 [2024-11-20 14:03:28.480447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.345 [2024-11-20 14:03:28.484539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.345 [2024-11-20 14:03:28.484575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.345 [2024-11-20 14:03:28.484585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.346 [2024-11-20 14:03:28.488579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.346 [2024-11-20 14:03:28.488615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.346 [2024-11-20 14:03:28.488623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.346 [2024-11-20 14:03:28.492697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.346 [2024-11-20 14:03:28.492746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.346 [2024-11-20 14:03:28.492755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.346 [2024-11-20 14:03:28.496994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.346 [2024-11-20 14:03:28.497033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.346 [2024-11-20 14:03:28.497043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.346 [2024-11-20 14:03:28.501307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.346 [2024-11-20 14:03:28.501345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.346 [2024-11-20 14:03:28.501355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.346 [2024-11-20 14:03:28.505724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.346 [2024-11-20 14:03:28.505755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.346 [2024-11-20 14:03:28.505764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.346 [2024-11-20 14:03:28.509923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.346 [2024-11-20 14:03:28.509960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3296 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.346 [2024-11-20 14:03:28.509969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.346 [2024-11-20 14:03:28.513970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.346 [2024-11-20 14:03:28.514025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.346 [2024-11-20 14:03:28.514034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.346 [2024-11-20 14:03:28.518090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.346 [2024-11-20 14:03:28.518127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.346 [2024-11-20 14:03:28.518137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.346 [2024-11-20 14:03:28.522319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.346 [2024-11-20 14:03:28.522352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.346 [2024-11-20 14:03:28.522360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.346 [2024-11-20 14:03:28.526398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.346 [2024-11-20 14:03:28.526434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.346 [2024-11-20 14:03:28.526443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.346 [2024-11-20 14:03:28.530465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.346 [2024-11-20 14:03:28.530499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.346 [2024-11-20 14:03:28.530508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.346 [2024-11-20 14:03:28.534619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.346 [2024-11-20 14:03:28.534655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.346 [2024-11-20 14:03:28.534664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.346 [2024-11-20 14:03:28.538686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.346 [2024-11-20 14:03:28.538730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:3 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.346 [2024-11-20 14:03:28.538739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.346 [2024-11-20 14:03:28.542699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.346 [2024-11-20 14:03:28.542745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.346 [2024-11-20 14:03:28.542771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.346 [2024-11-20 14:03:28.546796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.346 [2024-11-20 14:03:28.546831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.346 [2024-11-20 14:03:28.546839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.346 [2024-11-20 14:03:28.550831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.346 [2024-11-20 14:03:28.550864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.346 [2024-11-20 14:03:28.550873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.346 [2024-11-20 14:03:28.554966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.346 [2024-11-20 14:03:28.555002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.346 [2024-11-20 14:03:28.555011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.346 [2024-11-20 14:03:28.559015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.346 [2024-11-20 14:03:28.559052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.346 [2024-11-20 14:03:28.559061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.346 [2024-11-20 14:03:28.563062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.346 [2024-11-20 14:03:28.563100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.346 [2024-11-20 14:03:28.563109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.346 [2024-11-20 14:03:28.567154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.346 [2024-11-20 14:03:28.567192] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.346 [2024-11-20 14:03:28.567202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.346 [2024-11-20 14:03:28.571230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.346 [2024-11-20 14:03:28.571266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.346 [2024-11-20 14:03:28.571275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.346 [2024-11-20 14:03:28.575361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.346 [2024-11-20 14:03:28.575399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.346 [2024-11-20 14:03:28.575408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.346 [2024-11-20 14:03:28.579611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.346 [2024-11-20 14:03:28.579653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.346 [2024-11-20 14:03:28.579664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.346 7378.00 IOPS, 922.25 MiB/s [2024-11-20T14:03:28.669Z] [2024-11-20 14:03:28.584499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.346 [2024-11-20 14:03:28.584540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.346 [2024-11-20 14:03:28.584549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.346 [2024-11-20 14:03:28.588686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.346 [2024-11-20 14:03:28.588732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.346 [2024-11-20 14:03:28.588759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.346 [2024-11-20 14:03:28.592876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.346 [2024-11-20 14:03:28.592913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.346 [2024-11-20 14:03:28.592922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.346 [2024-11-20 14:03:28.597048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.346 [2024-11-20 14:03:28.597085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.346 [2024-11-20 14:03:28.597094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.347 [2024-11-20 14:03:28.601102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.347 [2024-11-20 14:03:28.601141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.347 [2024-11-20 14:03:28.601150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.347 [2024-11-20 14:03:28.605196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.347 [2024-11-20 14:03:28.605247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.347 [2024-11-20 14:03:28.605256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.347 [2024-11-20 14:03:28.609239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.347 [2024-11-20 14:03:28.609276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.347 [2024-11-20 14:03:28.609285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.347 [2024-11-20 14:03:28.613339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.347 [2024-11-20 14:03:28.613377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.347 [2024-11-20 14:03:28.613385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.347 [2024-11-20 14:03:28.617432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.347 [2024-11-20 14:03:28.617469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.347 [2024-11-20 14:03:28.617478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.347 [2024-11-20 14:03:28.621684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.347 [2024-11-20 14:03:28.621736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.347 [2024-11-20 14:03:28.621747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.347 [2024-11-20 14:03:28.625762] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.347 [2024-11-20 14:03:28.625794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.347 [2024-11-20 14:03:28.625803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.347 [2024-11-20 14:03:28.629771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.347 [2024-11-20 14:03:28.629805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.347 [2024-11-20 14:03:28.629814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.347 [2024-11-20 14:03:28.633867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.347 [2024-11-20 14:03:28.633901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.347 [2024-11-20 14:03:28.633910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.347 [2024-11-20 14:03:28.637920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.347 [2024-11-20 14:03:28.637957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.347 [2024-11-20 14:03:28.637965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.347 [2024-11-20 14:03:28.642025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.347 [2024-11-20 14:03:28.642059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.347 [2024-11-20 14:03:28.642068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.347 [2024-11-20 14:03:28.646045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.347 [2024-11-20 14:03:28.646079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.347 [2024-11-20 14:03:28.646088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.347 [2024-11-20 14:03:28.650124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.347 [2024-11-20 14:03:28.650158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.347 [2024-11-20 14:03:28.650167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:42:31.347 [2024-11-20 14:03:28.654206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.347 [2024-11-20 14:03:28.654241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.347 [2024-11-20 14:03:28.654249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.347 [2024-11-20 14:03:28.658301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.347 [2024-11-20 14:03:28.658338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.347 [2024-11-20 14:03:28.658347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.347 [2024-11-20 14:03:28.662392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.347 [2024-11-20 14:03:28.662426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.347 [2024-11-20 14:03:28.662434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.609 [2024-11-20 14:03:28.666438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.609 [2024-11-20 14:03:28.666472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.609 [2024-11-20 14:03:28.666480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.609 [2024-11-20 14:03:28.670421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.609 [2024-11-20 14:03:28.670454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.609 [2024-11-20 14:03:28.670463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.609 [2024-11-20 14:03:28.674492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.609 [2024-11-20 14:03:28.674526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.609 [2024-11-20 14:03:28.674535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.609 [2024-11-20 14:03:28.678590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.609 [2024-11-20 14:03:28.678624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.609 [2024-11-20 14:03:28.678633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.609 [2024-11-20 14:03:28.682526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.609 [2024-11-20 14:03:28.682559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.609 [2024-11-20 14:03:28.682568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.609 [2024-11-20 14:03:28.686524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.609 [2024-11-20 14:03:28.686558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.609 [2024-11-20 14:03:28.686567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.609 [2024-11-20 14:03:28.690660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.609 [2024-11-20 14:03:28.690693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.609 [2024-11-20 14:03:28.690702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.609 [2024-11-20 14:03:28.694696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.609 [2024-11-20 14:03:28.694742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.609 [2024-11-20 14:03:28.694751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.609 [2024-11-20 14:03:28.698772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.609 [2024-11-20 14:03:28.698804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.609 [2024-11-20 14:03:28.698813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.609 [2024-11-20 14:03:28.702781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.609 [2024-11-20 14:03:28.702813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.609 [2024-11-20 14:03:28.702820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.609 [2024-11-20 14:03:28.706778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.609 [2024-11-20 14:03:28.706810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.609 [2024-11-20 14:03:28.706818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.609 [2024-11-20 14:03:28.710759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.609 [2024-11-20 14:03:28.710790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.609 [2024-11-20 14:03:28.710798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.609 [2024-11-20 14:03:28.714779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.609 [2024-11-20 14:03:28.714811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.609 [2024-11-20 14:03:28.714819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.609 [2024-11-20 14:03:28.718839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.609 [2024-11-20 14:03:28.718872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.609 [2024-11-20 14:03:28.718880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.609 [2024-11-20 14:03:28.722867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.609 [2024-11-20 14:03:28.722900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.609 [2024-11-20 14:03:28.722909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.609 [2024-11-20 14:03:28.726817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.609 [2024-11-20 14:03:28.726850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.609 [2024-11-20 14:03:28.726858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.609 [2024-11-20 14:03:28.730785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.609 [2024-11-20 14:03:28.730817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.609 [2024-11-20 14:03:28.730826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.610 [2024-11-20 14:03:28.734756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.610 [2024-11-20 14:03:28.734787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.610 [2024-11-20 14:03:28.734795] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.610 [2024-11-20 14:03:28.738774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.610 [2024-11-20 14:03:28.738807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.610 [2024-11-20 14:03:28.738816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.610 [2024-11-20 14:03:28.742771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.610 [2024-11-20 14:03:28.742802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.610 [2024-11-20 14:03:28.742811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.610 [2024-11-20 14:03:28.746791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.610 [2024-11-20 14:03:28.746822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.610 [2024-11-20 14:03:28.746831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.610 [2024-11-20 14:03:28.750850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.610 [2024-11-20 14:03:28.750883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.610 [2024-11-20 14:03:28.750891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.610 [2024-11-20 14:03:28.754846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.610 [2024-11-20 14:03:28.754880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.610 [2024-11-20 14:03:28.754888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.610 [2024-11-20 14:03:28.758836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.610 [2024-11-20 14:03:28.758868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.610 [2024-11-20 14:03:28.758877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.610 [2024-11-20 14:03:28.762941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.610 [2024-11-20 14:03:28.762991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.610 
[2024-11-20 14:03:28.762999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.610 [2024-11-20 14:03:28.766902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.610 [2024-11-20 14:03:28.766940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.610 [2024-11-20 14:03:28.766966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.610 [2024-11-20 14:03:28.770890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.610 [2024-11-20 14:03:28.770922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.610 [2024-11-20 14:03:28.770936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.610 [2024-11-20 14:03:28.774835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.610 [2024-11-20 14:03:28.774868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.610 [2024-11-20 14:03:28.774877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.610 [2024-11-20 14:03:28.778813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.610 [2024-11-20 14:03:28.778845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.610 [2024-11-20 14:03:28.778853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.610 [2024-11-20 14:03:28.782780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.610 [2024-11-20 14:03:28.782811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.610 [2024-11-20 14:03:28.782820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.610 [2024-11-20 14:03:28.786774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.610 [2024-11-20 14:03:28.786806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.610 [2024-11-20 14:03:28.786815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.610 [2024-11-20 14:03:28.790870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.610 [2024-11-20 14:03:28.790920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.610 [2024-11-20 14:03:28.790936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.610 [2024-11-20 14:03:28.794918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.610 [2024-11-20 14:03:28.794974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.610 [2024-11-20 14:03:28.794984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.610 [2024-11-20 14:03:28.798915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.610 [2024-11-20 14:03:28.798973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.610 [2024-11-20 14:03:28.798983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.610 [2024-11-20 14:03:28.803020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.610 [2024-11-20 14:03:28.803055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.610 [2024-11-20 14:03:28.803065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.610 [2024-11-20 14:03:28.807047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.610 [2024-11-20 14:03:28.807081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.610 [2024-11-20 14:03:28.807090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.610 [2024-11-20 14:03:28.811063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.610 [2024-11-20 14:03:28.811097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.610 [2024-11-20 14:03:28.811106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.610 [2024-11-20 14:03:28.815056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.610 [2024-11-20 14:03:28.815091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.610 [2024-11-20 14:03:28.815100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.610 [2024-11-20 14:03:28.819084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.610 [2024-11-20 14:03:28.819121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:8 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.610 [2024-11-20 14:03:28.819131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.610 [2024-11-20 14:03:28.823137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.610 [2024-11-20 14:03:28.823173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.610 [2024-11-20 14:03:28.823183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.610 [2024-11-20 14:03:28.827116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.610 [2024-11-20 14:03:28.827153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.610 [2024-11-20 14:03:28.827162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.610 [2024-11-20 14:03:28.831168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.610 [2024-11-20 14:03:28.831205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.610 [2024-11-20 14:03:28.831213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.610 [2024-11-20 14:03:28.835128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.610 [2024-11-20 14:03:28.835164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.610 [2024-11-20 14:03:28.835173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.610 [2024-11-20 14:03:28.839222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.610 [2024-11-20 14:03:28.839259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.610 [2024-11-20 14:03:28.839268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.611 [2024-11-20 14:03:28.843305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.611 [2024-11-20 14:03:28.843342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.611 [2024-11-20 14:03:28.843351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.611 [2024-11-20 14:03:28.847341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.611 [2024-11-20 14:03:28.847377] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.611 [2024-11-20 14:03:28.847386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.611 [2024-11-20 14:03:28.851383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.611 [2024-11-20 14:03:28.851420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.611 [2024-11-20 14:03:28.851429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.611 [2024-11-20 14:03:28.855367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.611 [2024-11-20 14:03:28.855455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.611 [2024-11-20 14:03:28.855465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.611 [2024-11-20 14:03:28.859470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.611 [2024-11-20 14:03:28.859508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.611 [2024-11-20 14:03:28.859517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.611 [2024-11-20 14:03:28.863470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.611 [2024-11-20 14:03:28.863509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.611 [2024-11-20 14:03:28.863518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.611 [2024-11-20 14:03:28.867490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.611 [2024-11-20 14:03:28.867527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.611 [2024-11-20 14:03:28.867536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.611 [2024-11-20 14:03:28.871503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.611 [2024-11-20 14:03:28.871540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.611 [2024-11-20 14:03:28.871548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.611 [2024-11-20 14:03:28.875534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.611 
[2024-11-20 14:03:28.875572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.611 [2024-11-20 14:03:28.875580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.611 [2024-11-20 14:03:28.879564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.611 [2024-11-20 14:03:28.879600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.611 [2024-11-20 14:03:28.879609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.611 [2024-11-20 14:03:28.883719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.611 [2024-11-20 14:03:28.883751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.611 [2024-11-20 14:03:28.883759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.611 [2024-11-20 14:03:28.887737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.611 [2024-11-20 14:03:28.887768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.611 [2024-11-20 14:03:28.887776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.611 [2024-11-20 14:03:28.891759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.611 [2024-11-20 14:03:28.891792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.611 [2024-11-20 14:03:28.891801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.611 [2024-11-20 14:03:28.895724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.611 [2024-11-20 14:03:28.895758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.611 [2024-11-20 14:03:28.895767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.611 [2024-11-20 14:03:28.899673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.611 [2024-11-20 14:03:28.899722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.611 [2024-11-20 14:03:28.899731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.611 [2024-11-20 14:03:28.903613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xb2d400) 00:42:31.611 [2024-11-20 14:03:28.903649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.611 [2024-11-20 14:03:28.903657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.611 [2024-11-20 14:03:28.907641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.611 [2024-11-20 14:03:28.907677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.611 [2024-11-20 14:03:28.907686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.611 [2024-11-20 14:03:28.911665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.611 [2024-11-20 14:03:28.911702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.611 [2024-11-20 14:03:28.911726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.611 [2024-11-20 14:03:28.915806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.611 [2024-11-20 14:03:28.915843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.611 [2024-11-20 14:03:28.915852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.611 [2024-11-20 14:03:28.919892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.611 [2024-11-20 14:03:28.919930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.611 [2024-11-20 14:03:28.919939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.611 [2024-11-20 14:03:28.923954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.611 [2024-11-20 14:03:28.923991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.611 [2024-11-20 14:03:28.924000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.611 [2024-11-20 14:03:28.927913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.611 [2024-11-20 14:03:28.927949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.611 [2024-11-20 14:03:28.927957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.873 [2024-11-20 14:03:28.931878] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.873 [2024-11-20 14:03:28.931914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.873 [2024-11-20 14:03:28.931922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.873 [2024-11-20 14:03:28.935869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.873 [2024-11-20 14:03:28.935907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.873 [2024-11-20 14:03:28.935916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.873 [2024-11-20 14:03:28.939877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.873 [2024-11-20 14:03:28.939929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.873 [2024-11-20 14:03:28.939938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.873 [2024-11-20 14:03:28.943843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.873 [2024-11-20 14:03:28.943882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.873 [2024-11-20 14:03:28.943891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.873 [2024-11-20 14:03:28.947864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.873 [2024-11-20 14:03:28.947901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.873 [2024-11-20 14:03:28.947910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.873 [2024-11-20 14:03:28.951970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.873 [2024-11-20 14:03:28.952009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.873 [2024-11-20 14:03:28.952019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.873 [2024-11-20 14:03:28.956099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.873 [2024-11-20 14:03:28.956146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.873 [2024-11-20 14:03:28.956154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:42:31.873 [2024-11-20 14:03:28.960080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.873 [2024-11-20 14:03:28.960117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.873 [2024-11-20 14:03:28.960126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.873 [2024-11-20 14:03:28.964172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.873 [2024-11-20 14:03:28.964207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.873 [2024-11-20 14:03:28.964217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.873 [2024-11-20 14:03:28.968383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.873 [2024-11-20 14:03:28.968423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.873 [2024-11-20 14:03:28.968433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.873 [2024-11-20 14:03:28.972599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.873 [2024-11-20 14:03:28.972635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.873 [2024-11-20 14:03:28.972645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.873 [2024-11-20 14:03:28.976757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.873 [2024-11-20 14:03:28.976789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.873 [2024-11-20 14:03:28.976798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.873 [2024-11-20 14:03:28.980867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.873 [2024-11-20 14:03:28.980904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.873 [2024-11-20 14:03:28.980913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.873 [2024-11-20 14:03:28.985076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.873 [2024-11-20 14:03:28.985120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.873 [2024-11-20 14:03:28.985130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.873 [2024-11-20 14:03:28.989091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.873 [2024-11-20 14:03:28.989127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.873 [2024-11-20 14:03:28.989137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.873 [2024-11-20 14:03:28.993109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.874 [2024-11-20 14:03:28.993144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.874 [2024-11-20 14:03:28.993153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.874 [2024-11-20 14:03:28.997262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.874 [2024-11-20 14:03:28.997297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.874 [2024-11-20 14:03:28.997306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.874 [2024-11-20 14:03:29.001416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.874 [2024-11-20 14:03:29.001451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.874 [2024-11-20 14:03:29.001460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.874 [2024-11-20 14:03:29.005474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.874 [2024-11-20 14:03:29.005557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.874 [2024-11-20 14:03:29.005567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.874 [2024-11-20 14:03:29.009574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.874 [2024-11-20 14:03:29.009610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.874 [2024-11-20 14:03:29.009619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.874 [2024-11-20 14:03:29.013491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.874 [2024-11-20 14:03:29.013526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.874 [2024-11-20 14:03:29.013535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.874 [2024-11-20 14:03:29.017454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.874 [2024-11-20 14:03:29.017489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.874 [2024-11-20 14:03:29.017498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.874 [2024-11-20 14:03:29.021508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.874 [2024-11-20 14:03:29.021543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.874 [2024-11-20 14:03:29.021552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.874 [2024-11-20 14:03:29.025573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.874 [2024-11-20 14:03:29.025607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.874 [2024-11-20 14:03:29.025616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.874 [2024-11-20 14:03:29.029641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.874 [2024-11-20 14:03:29.029676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.874 [2024-11-20 14:03:29.029684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.874 [2024-11-20 14:03:29.033757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.874 [2024-11-20 14:03:29.033791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.874 [2024-11-20 14:03:29.033800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.874 [2024-11-20 14:03:29.037721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.874 [2024-11-20 14:03:29.037755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.874 [2024-11-20 14:03:29.037764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.874 [2024-11-20 14:03:29.041796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.874 [2024-11-20 14:03:29.041835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.874 [2024-11-20 14:03:29.041845] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.874 [2024-11-20 14:03:29.045955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.874 [2024-11-20 14:03:29.045995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.874 [2024-11-20 14:03:29.046004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.874 [2024-11-20 14:03:29.050068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.874 [2024-11-20 14:03:29.050108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.874 [2024-11-20 14:03:29.050117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.874 [2024-11-20 14:03:29.054239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.874 [2024-11-20 14:03:29.054273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.874 [2024-11-20 14:03:29.054281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.874 [2024-11-20 14:03:29.058332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.874 [2024-11-20 14:03:29.058367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.874 [2024-11-20 14:03:29.058376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.874 [2024-11-20 14:03:29.062380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.874 [2024-11-20 14:03:29.062415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.874 [2024-11-20 14:03:29.062424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.874 [2024-11-20 14:03:29.066422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.874 [2024-11-20 14:03:29.066456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.874 [2024-11-20 14:03:29.066465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.874 [2024-11-20 14:03:29.070452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.874 [2024-11-20 14:03:29.070488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:42:31.874 [2024-11-20 14:03:29.070496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.874 [2024-11-20 14:03:29.074600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.874 [2024-11-20 14:03:29.074633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.874 [2024-11-20 14:03:29.074642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.874 [2024-11-20 14:03:29.078701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.874 [2024-11-20 14:03:29.078747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.874 [2024-11-20 14:03:29.078773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.874 [2024-11-20 14:03:29.082749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.874 [2024-11-20 14:03:29.082780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.874 [2024-11-20 14:03:29.082788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.874 [2024-11-20 14:03:29.086762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.874 [2024-11-20 14:03:29.086793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.874 [2024-11-20 14:03:29.086802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.874 [2024-11-20 14:03:29.090701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.874 [2024-11-20 14:03:29.090743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.874 [2024-11-20 14:03:29.090752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.874 [2024-11-20 14:03:29.094762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.874 [2024-11-20 14:03:29.094793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.874 [2024-11-20 14:03:29.094802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.874 [2024-11-20 14:03:29.098767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.874 [2024-11-20 14:03:29.098799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13600 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.874 [2024-11-20 14:03:29.098808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.875 [2024-11-20 14:03:29.102784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.875 [2024-11-20 14:03:29.102822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.875 [2024-11-20 14:03:29.102830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.875 [2024-11-20 14:03:29.106819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.875 [2024-11-20 14:03:29.106853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.875 [2024-11-20 14:03:29.106861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.875 [2024-11-20 14:03:29.110827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.875 [2024-11-20 14:03:29.110860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.875 [2024-11-20 14:03:29.110868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.875 [2024-11-20 14:03:29.114775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.875 [2024-11-20 14:03:29.114806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.875 [2024-11-20 14:03:29.114815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.875 [2024-11-20 14:03:29.118746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.875 [2024-11-20 14:03:29.118777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.875 [2024-11-20 14:03:29.118785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.875 [2024-11-20 14:03:29.122740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.875 [2024-11-20 14:03:29.122771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.875 [2024-11-20 14:03:29.122779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.875 [2024-11-20 14:03:29.126689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.875 [2024-11-20 14:03:29.126735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.875 [2024-11-20 14:03:29.126744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.875 [2024-11-20 14:03:29.130788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.875 [2024-11-20 14:03:29.130821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.875 [2024-11-20 14:03:29.130829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.875 [2024-11-20 14:03:29.134837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.875 [2024-11-20 14:03:29.134871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.875 [2024-11-20 14:03:29.134879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.875 [2024-11-20 14:03:29.138812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.875 [2024-11-20 14:03:29.138845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.875 [2024-11-20 14:03:29.138854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.875 [2024-11-20 14:03:29.142814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.875 [2024-11-20 14:03:29.142847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.875 [2024-11-20 14:03:29.142855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.875 [2024-11-20 14:03:29.146790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.875 [2024-11-20 14:03:29.146824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.875 [2024-11-20 14:03:29.146832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.875 [2024-11-20 14:03:29.150743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.875 [2024-11-20 14:03:29.150773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.875 [2024-11-20 14:03:29.150782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.875 [2024-11-20 14:03:29.154641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.875 [2024-11-20 14:03:29.154675] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.875 [2024-11-20 14:03:29.154684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.875 [2024-11-20 14:03:29.158661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.875 [2024-11-20 14:03:29.158695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.875 [2024-11-20 14:03:29.158703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.875 [2024-11-20 14:03:29.162710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.875 [2024-11-20 14:03:29.162752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.875 [2024-11-20 14:03:29.162761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.875 [2024-11-20 14:03:29.166667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.875 [2024-11-20 14:03:29.166701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.875 [2024-11-20 14:03:29.166721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.875 [2024-11-20 14:03:29.170634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.875 [2024-11-20 14:03:29.170667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.875 [2024-11-20 14:03:29.170676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.875 [2024-11-20 14:03:29.174557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.875 [2024-11-20 14:03:29.174591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.875 [2024-11-20 14:03:29.174600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:31.875 [2024-11-20 14:03:29.178624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.875 [2024-11-20 14:03:29.178661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.875 [2024-11-20 14:03:29.178670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:31.875 [2024-11-20 14:03:29.182633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 
00:42:31.875 [2024-11-20 14:03:29.182668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.875 [2024-11-20 14:03:29.182676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:31.875 [2024-11-20 14:03:29.186726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.875 [2024-11-20 14:03:29.186758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.875 [2024-11-20 14:03:29.186767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:31.875 [2024-11-20 14:03:29.190743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:31.875 [2024-11-20 14:03:29.190774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:31.875 [2024-11-20 14:03:29.190783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:32.136 [2024-11-20 14:03:29.194761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.136 [2024-11-20 14:03:29.194793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.137 [2024-11-20 14:03:29.194802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:32.137 [2024-11-20 14:03:29.198745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.137 [2024-11-20 14:03:29.198776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.137 [2024-11-20 14:03:29.198785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:32.137 [2024-11-20 14:03:29.202734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.137 [2024-11-20 14:03:29.202765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.137 [2024-11-20 14:03:29.202774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:32.137 [2024-11-20 14:03:29.206813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.137 [2024-11-20 14:03:29.206846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.137 [2024-11-20 14:03:29.206855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:32.137 [2024-11-20 14:03:29.210902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xb2d400) 00:42:32.137 [2024-11-20 14:03:29.210947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.137 [2024-11-20 14:03:29.210957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:32.137 [2024-11-20 14:03:29.214941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.137 [2024-11-20 14:03:29.215007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.137 [2024-11-20 14:03:29.215017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:32.137 [2024-11-20 14:03:29.218925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.137 [2024-11-20 14:03:29.218984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.137 [2024-11-20 14:03:29.218993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:32.137 [2024-11-20 14:03:29.222896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.137 [2024-11-20 14:03:29.222935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.137 [2024-11-20 14:03:29.222944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:32.137 [2024-11-20 14:03:29.226920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.137 [2024-11-20 14:03:29.226976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.137 [2024-11-20 14:03:29.227003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:32.137 [2024-11-20 14:03:29.231049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.137 [2024-11-20 14:03:29.231083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.137 [2024-11-20 14:03:29.231093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:32.137 [2024-11-20 14:03:29.235096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.137 [2024-11-20 14:03:29.235135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.137 [2024-11-20 14:03:29.235144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:32.137 [2024-11-20 14:03:29.239154] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.137 [2024-11-20 14:03:29.239191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.137 [2024-11-20 14:03:29.239200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:32.137 [2024-11-20 14:03:29.243174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.137 [2024-11-20 14:03:29.243209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.137 [2024-11-20 14:03:29.243218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:32.137 [2024-11-20 14:03:29.247299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.137 [2024-11-20 14:03:29.247342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.137 [2024-11-20 14:03:29.247353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:32.137 [2024-11-20 14:03:29.251524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.137 [2024-11-20 14:03:29.251561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.137 [2024-11-20 14:03:29.251570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:32.137 [2024-11-20 14:03:29.255532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.137 [2024-11-20 14:03:29.255619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.137 [2024-11-20 14:03:29.255630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:32.137 [2024-11-20 14:03:29.259660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.137 [2024-11-20 14:03:29.259754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.137 [2024-11-20 14:03:29.259781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:32.137 [2024-11-20 14:03:29.263818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.137 [2024-11-20 14:03:29.263855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.137 [2024-11-20 14:03:29.263864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:42:32.137 [2024-11-20 14:03:29.267888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.137 [2024-11-20 14:03:29.267926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.137 [2024-11-20 14:03:29.267935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:32.137 [2024-11-20 14:03:29.271882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.137 [2024-11-20 14:03:29.271923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.137 [2024-11-20 14:03:29.271931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:32.137 [2024-11-20 14:03:29.275869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.137 [2024-11-20 14:03:29.275907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.137 [2024-11-20 14:03:29.275915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:32.137 [2024-11-20 14:03:29.279914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.137 [2024-11-20 14:03:29.279981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.137 [2024-11-20 14:03:29.279991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:32.137 [2024-11-20 14:03:29.283989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.137 [2024-11-20 14:03:29.284025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.137 [2024-11-20 14:03:29.284034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:32.137 [2024-11-20 14:03:29.287942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.137 [2024-11-20 14:03:29.287978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.137 [2024-11-20 14:03:29.287988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:32.137 [2024-11-20 14:03:29.291977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.137 [2024-11-20 14:03:29.292014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.137 [2024-11-20 14:03:29.292022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:32.137 [2024-11-20 14:03:29.296100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.137 [2024-11-20 14:03:29.296138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.137 [2024-11-20 14:03:29.296148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:32.137 [2024-11-20 14:03:29.300150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.137 [2024-11-20 14:03:29.300187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.137 [2024-11-20 14:03:29.300196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:32.137 [2024-11-20 14:03:29.304210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.138 [2024-11-20 14:03:29.304246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.138 [2024-11-20 14:03:29.304255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:32.138 [2024-11-20 14:03:29.308261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.138 [2024-11-20 14:03:29.308297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.138 [2024-11-20 14:03:29.308306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:32.138 [2024-11-20 14:03:29.312339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.138 [2024-11-20 14:03:29.312374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.138 [2024-11-20 14:03:29.312383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:32.138 [2024-11-20 14:03:29.316417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.138 [2024-11-20 14:03:29.316454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.138 [2024-11-20 14:03:29.316463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:32.138 [2024-11-20 14:03:29.320399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.138 [2024-11-20 14:03:29.320434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.138 [2024-11-20 14:03:29.320443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:32.138 [2024-11-20 14:03:29.324468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.138 [2024-11-20 14:03:29.324503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.138 [2024-11-20 14:03:29.324512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:32.138 [2024-11-20 14:03:29.328501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.138 [2024-11-20 14:03:29.328536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.138 [2024-11-20 14:03:29.328545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:32.138 [2024-11-20 14:03:29.332563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.138 [2024-11-20 14:03:29.332598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.138 [2024-11-20 14:03:29.332607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:32.138 [2024-11-20 14:03:29.336538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.138 [2024-11-20 14:03:29.336573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.138 [2024-11-20 14:03:29.336582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:32.138 [2024-11-20 14:03:29.340560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.138 [2024-11-20 14:03:29.340596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.138 [2024-11-20 14:03:29.340604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:32.138 [2024-11-20 14:03:29.344593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.138 [2024-11-20 14:03:29.344628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.138 [2024-11-20 14:03:29.344638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:32.138 [2024-11-20 14:03:29.348631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.138 [2024-11-20 14:03:29.348667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.138 [2024-11-20 14:03:29.348676] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:32.138 [2024-11-20 14:03:29.352784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.138 [2024-11-20 14:03:29.352818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.138 [2024-11-20 14:03:29.352828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:32.138 [2024-11-20 14:03:29.356789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.138 [2024-11-20 14:03:29.356824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.138 [2024-11-20 14:03:29.356833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:32.138 [2024-11-20 14:03:29.360762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.138 [2024-11-20 14:03:29.360795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.138 [2024-11-20 14:03:29.360804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:32.138 [2024-11-20 14:03:29.364790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.138 [2024-11-20 14:03:29.364825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.138 [2024-11-20 14:03:29.364834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:32.138 [2024-11-20 14:03:29.368977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.138 [2024-11-20 14:03:29.369012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.138 [2024-11-20 14:03:29.369021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:32.138 [2024-11-20 14:03:29.373136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.138 [2024-11-20 14:03:29.373184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.138 [2024-11-20 14:03:29.373194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:32.138 [2024-11-20 14:03:29.377229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.138 [2024-11-20 14:03:29.377264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.138 
[2024-11-20 14:03:29.377273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:32.138 [2024-11-20 14:03:29.381234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.138 [2024-11-20 14:03:29.381271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.138 [2024-11-20 14:03:29.381280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:32.138 [2024-11-20 14:03:29.385409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.138 [2024-11-20 14:03:29.385445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.138 [2024-11-20 14:03:29.385454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:32.138 [2024-11-20 14:03:29.389427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.138 [2024-11-20 14:03:29.389465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.138 [2024-11-20 14:03:29.389474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:32.138 [2024-11-20 14:03:29.393372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.138 [2024-11-20 14:03:29.393408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.138 [2024-11-20 14:03:29.393417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:32.138 [2024-11-20 14:03:29.397369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.138 [2024-11-20 14:03:29.397404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.138 [2024-11-20 14:03:29.397414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:32.138 [2024-11-20 14:03:29.401446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.138 [2024-11-20 14:03:29.401485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.138 [2024-11-20 14:03:29.401494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:32.138 [2024-11-20 14:03:29.405572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.138 [2024-11-20 14:03:29.405623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:42:32.138 [2024-11-20 14:03:29.405632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:32.138 [2024-11-20 14:03:29.409599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.138 [2024-11-20 14:03:29.409640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.138 [2024-11-20 14:03:29.409649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:32.139 [2024-11-20 14:03:29.413555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.139 [2024-11-20 14:03:29.413595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.139 [2024-11-20 14:03:29.413605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:32.139 [2024-11-20 14:03:29.417582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.139 [2024-11-20 14:03:29.417623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.139 [2024-11-20 14:03:29.417632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:32.139 [2024-11-20 14:03:29.421656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.139 [2024-11-20 14:03:29.421695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.139 [2024-11-20 14:03:29.421713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:32.139 [2024-11-20 14:03:29.425813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.139 [2024-11-20 14:03:29.425851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.139 [2024-11-20 14:03:29.425860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:32.139 [2024-11-20 14:03:29.429856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.139 [2024-11-20 14:03:29.429892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.139 [2024-11-20 14:03:29.429901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:32.139 [2024-11-20 14:03:29.433900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.139 [2024-11-20 14:03:29.433938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.139 [2024-11-20 14:03:29.433948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:32.139 [2024-11-20 14:03:29.437918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.139 [2024-11-20 14:03:29.437955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.139 [2024-11-20 14:03:29.437963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:32.139 [2024-11-20 14:03:29.441892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.139 [2024-11-20 14:03:29.441928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.139 [2024-11-20 14:03:29.441938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:32.139 [2024-11-20 14:03:29.445895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.139 [2024-11-20 14:03:29.445934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.139 [2024-11-20 14:03:29.445943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:32.139 [2024-11-20 14:03:29.449943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.139 [2024-11-20 14:03:29.449982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.139 [2024-11-20 14:03:29.449992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:32.139 [2024-11-20 14:03:29.453950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.139 [2024-11-20 14:03:29.453987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.139 [2024-11-20 14:03:29.453997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:32.398 [2024-11-20 14:03:29.458000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.398 [2024-11-20 14:03:29.458039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.398 [2024-11-20 14:03:29.458048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:32.398 [2024-11-20 14:03:29.462048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.398 [2024-11-20 14:03:29.462100] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.398 [2024-11-20 14:03:29.462109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:32.398 [2024-11-20 14:03:29.466514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.398 [2024-11-20 14:03:29.466555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.398 [2024-11-20 14:03:29.466564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:32.398 [2024-11-20 14:03:29.470653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.398 [2024-11-20 14:03:29.470692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.398 [2024-11-20 14:03:29.470702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:32.398 [2024-11-20 14:03:29.474741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.398 [2024-11-20 14:03:29.474777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.398 [2024-11-20 14:03:29.474786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:32.398 [2024-11-20 14:03:29.478942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.398 [2024-11-20 14:03:29.479077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.398 [2024-11-20 14:03:29.479089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:32.398 [2024-11-20 14:03:29.483294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.398 [2024-11-20 14:03:29.483335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.398 [2024-11-20 14:03:29.483345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:32.398 [2024-11-20 14:03:29.487605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.398 [2024-11-20 14:03:29.487644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.398 [2024-11-20 14:03:29.487654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:32.398 [2024-11-20 14:03:29.492089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.398 
[2024-11-20 14:03:29.492139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.398 [2024-11-20 14:03:29.492149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:32.398 [2024-11-20 14:03:29.496341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.398 [2024-11-20 14:03:29.496379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.398 [2024-11-20 14:03:29.496389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:32.398 [2024-11-20 14:03:29.500472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.398 [2024-11-20 14:03:29.500510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.398 [2024-11-20 14:03:29.500520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:32.398 [2024-11-20 14:03:29.504684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.398 [2024-11-20 14:03:29.504734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.398 [2024-11-20 14:03:29.504744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:32.398 [2024-11-20 14:03:29.509074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.398 [2024-11-20 14:03:29.509160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.399 [2024-11-20 14:03:29.509170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:32.399 [2024-11-20 14:03:29.513376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.399 [2024-11-20 14:03:29.513414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.399 [2024-11-20 14:03:29.513424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:32.399 [2024-11-20 14:03:29.517500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.399 [2024-11-20 14:03:29.517536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.399 [2024-11-20 14:03:29.517544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:32.399 [2024-11-20 14:03:29.521646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xb2d400) 00:42:32.399 [2024-11-20 14:03:29.521683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.399 [2024-11-20 14:03:29.521692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:32.399 [2024-11-20 14:03:29.525781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.399 [2024-11-20 14:03:29.525815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.399 [2024-11-20 14:03:29.525825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:32.399 [2024-11-20 14:03:29.529848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.399 [2024-11-20 14:03:29.529883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.399 [2024-11-20 14:03:29.529892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:32.399 [2024-11-20 14:03:29.533862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.399 [2024-11-20 14:03:29.533900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.399 [2024-11-20 14:03:29.533910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:32.399 [2024-11-20 14:03:29.537830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.399 [2024-11-20 14:03:29.537865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.399 [2024-11-20 14:03:29.537873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:32.399 [2024-11-20 14:03:29.541848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.399 [2024-11-20 14:03:29.541883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.399 [2024-11-20 14:03:29.541892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:32.399 [2024-11-20 14:03:29.545914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.399 [2024-11-20 14:03:29.545949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.399 [2024-11-20 14:03:29.545958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:32.399 [2024-11-20 14:03:29.549940] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.399 [2024-11-20 14:03:29.549975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.399 [2024-11-20 14:03:29.549986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:32.399 [2024-11-20 14:03:29.553936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.399 [2024-11-20 14:03:29.553970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.399 [2024-11-20 14:03:29.553979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:32.399 [2024-11-20 14:03:29.557975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.399 [2024-11-20 14:03:29.558026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.399 [2024-11-20 14:03:29.558035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:32.399 [2024-11-20 14:03:29.562060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.399 [2024-11-20 14:03:29.562094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.399 [2024-11-20 14:03:29.562103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:32.399 [2024-11-20 14:03:29.566093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.399 [2024-11-20 14:03:29.566129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.399 [2024-11-20 14:03:29.566138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:32.399 [2024-11-20 14:03:29.570055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.399 [2024-11-20 14:03:29.570092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.399 [2024-11-20 14:03:29.570100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:32.399 [2024-11-20 14:03:29.573927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400) 00:42:32.399 [2024-11-20 14:03:29.573965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:32.399 [2024-11-20 14:03:29.573973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
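Every failed READ in the dump above follows the same three-line pattern: the host-side nvme_tcp layer reports a data digest mismatch, the offending READ command is printed, and its completion is returned as COMMAND TRANSIENT TRANSPORT ERROR (00/22). As a rough cross-check of the error counter the harness queries below, those completions can be counted straight from a saved copy of this console output; a minimal sketch follows, in which the log file path is a placeholder and not a file produced by this job.

#!/usr/bin/env bash
# Count transient-transport-error completions seen on qid:1 in a saved console log.
# "console.log" is an assumed placeholder path for a copy of this output.
log=${1:-console.log}
grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1' "$log"

The count obtained this way should roughly track the command_transient_transport_error statistic that the test script reads over RPC once the run finishes.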
00:42:32.399 [2024-11-20 14:03:29.577857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400)
00:42:32.399 [2024-11-20 14:03:29.577892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:42:32.399 [2024-11-20 14:03:29.577901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:42:32.399 7502.00 IOPS, 937.75 MiB/s [2024-11-20T14:03:29.722Z] [2024-11-20 14:03:29.582201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb2d400)
00:42:32.399 [2024-11-20 14:03:29.582240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:42:32.399 [2024-11-20 14:03:29.582249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:42:32.399
00:42:32.399 Latency(us)
00:42:32.399 [2024-11-20T14:03:29.722Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:42:32.399 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:42:32.399 nvme0n1 : 2.00 7500.39 937.55 0.00 0.00 2130.01 1903.12 8699.98
00:42:32.399 [2024-11-20T14:03:29.722Z] ===================================================================================================================
00:42:32.399 [2024-11-20T14:03:29.722Z] Total : 7500.39 937.55 0.00 0.00 2130.01 1903.12 8699.98
00:42:32.399 {
00:42:32.399 "results": [
00:42:32.399 {
00:42:32.399 "job": "nvme0n1",
00:42:32.399 "core_mask": "0x2",
00:42:32.399 "workload": "randread",
00:42:32.399 "status": "finished",
00:42:32.399 "queue_depth": 16,
00:42:32.399 "io_size": 131072,
00:42:32.399 "runtime": 2.002563,
00:42:32.399 "iops": 7500.38825245448,
00:42:32.399 "mibps": 937.54853155681,
00:42:32.399 "io_failed": 0,
00:42:32.399 "io_timeout": 0,
00:42:32.399 "avg_latency_us": 2130.013838433762,
00:42:32.399 "min_latency_us": 1903.1196506550218,
00:42:32.399 "max_latency_us": 8699.975545851528
00:42:32.399 }
00:42:32.399 ],
00:42:32.399 "core_count": 1
00:42:32.399 }
00:42:32.399 14:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:42:32.399 14:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:42:32.399 14:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:42:32.399 | .driver_specific
00:42:32.399 | .nvme_error
00:42:32.399 | .status_code
00:42:32.399 | .command_transient_transport_error'
00:42:32.399 14:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:42:32.659 14:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 485 > 0 ))
00:42:32.659 14:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80407
00:42:32.659 14:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80407 ']'
00:42:32.659 14:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80407
00:42:32.659 14:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error --
common/autotest_common.sh@959 -- # uname 00:42:32.659 14:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:32.659 14:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80407 00:42:32.659 14:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:42:32.659 14:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:42:32.659 14:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80407' 00:42:32.659 killing process with pid 80407 00:42:32.659 14:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80407 00:42:32.659 Received shutdown signal, test time was about 2.000000 seconds 00:42:32.659 00:42:32.659 Latency(us) 00:42:32.659 [2024-11-20T14:03:29.982Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:32.659 [2024-11-20T14:03:29.982Z] =================================================================================================================== 00:42:32.659 [2024-11-20T14:03:29.982Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:32.659 14:03:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80407 00:42:32.919 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:42:32.919 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:42:32.919 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:42:32.919 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:42:32.919 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:42:32.919 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80467 00:42:32.919 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:42:32.919 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80467 /var/tmp/bperf.sock 00:42:32.919 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80467 ']' 00:42:32.919 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:32.919 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:32.919 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:32.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
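The trace above first reads the transient-transport-error counter for nvme0n1 from the randread run (485 errors, which satisfies the (( errcount > 0 )) check), kills the old bdevperf instance, and then launches a fresh one for the randwrite 4096/128 case, waiting for its RPC socket before continuing. A minimal sketch of that check-and-relaunch pattern is shown below; the variable names and the helper structure are illustrative and not taken from host/digest.sh, while the RPC calls, jq filter and bdevperf flags mirror the trace.

#!/usr/bin/env bash
# Sketch only: SPDK_DIR and BPERF_SOCK are assumed names, the commands mirror the trace above.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
BPERF_SOCK=/var/tmp/bperf.sock

# Read the transient-transport-error counter for nvme0n1 from the bdevperf RPC server.
errcount=$("$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

# The digest-error case only passes if the injected corruption actually produced errors.
(( errcount > 0 )) || { echo "no transient transport errors recorded"; exit 1; }

# Relaunch bdevperf for the next case (randwrite, 4 KiB I/O, queue depth 128);
# -z keeps it idle until perform_tests is sent over the RPC socket.
"$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &
bperfpid=$!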
00:42:32.919 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:32.919 14:03:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:42:33.179 [2024-11-20 14:03:30.253188] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:42:33.179 [2024-11-20 14:03:30.253405] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80467 ] 00:42:33.179 [2024-11-20 14:03:30.404675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:33.179 [2024-11-20 14:03:30.488714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:33.439 [2024-11-20 14:03:30.568599] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:42:34.008 14:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:34.008 14:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:42:34.008 14:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:42:34.008 14:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:42:34.268 14:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:42:34.268 14:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.268 14:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:42:34.268 14:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.268 14:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:42:34.268 14:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:42:34.528 nvme0n1 00:42:34.528 14:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:42:34.528 14:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.528 14:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:42:34.528 14:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.528 14:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:42:34.528 14:03:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:34.787 Running I/O for 2 seconds... 
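Everything the randwrite pass needs is wired up over RPC in the trace above: error statistics and unlimited retries on the bdev_nvme layer, a controller attached with --ddgst so TCP data digests are in play, crc32c corruption injected into the accel layer every 256 operations, and finally perform_tests to start the 2-second run in the idle (-z) bdevperf instance launched earlier. A consolidated sketch of that sequence under the same paths and socket as the trace; bperf_rpc and rpc_cmd stand in for the digest.sh/autotest helpers and are simplified here (the rpc_cmd socket is assumed to be the target app's default, which the trace does not spell out):

  bperf_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
  rpc_cmd()   { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }   # target-side RPC, default socket assumed

  # keep per-controller NVMe error counters and retry transient failures indefinitely
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # keep crc32c injection off while the controller is attached
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  # attach the TCP target with data digest enabled; the namespace shows up as nvme0n1
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # now corrupt every 256th crc32c computation, producing the digest errors logged below
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
  # start the configured randwrite workload in the waiting bdevperf instance
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests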
00:42:34.787 [2024-11-20 14:03:31.865641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016efb048 00:42:34.787 [2024-11-20 14:03:31.866902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:34.787 [2024-11-20 14:03:31.866982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:34.787 [2024-11-20 14:03:31.879562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016efb8b8 00:42:34.788 [2024-11-20 14:03:31.881037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:34.788 [2024-11-20 14:03:31.881081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:34.788 [2024-11-20 14:03:31.893392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016efc128 00:42:34.788 [2024-11-20 14:03:31.894573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:34.788 [2024-11-20 14:03:31.894684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:42:34.788 [2024-11-20 14:03:31.906685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016efc998 00:42:34.788 [2024-11-20 14:03:31.907923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:34.788 [2024-11-20 14:03:31.907971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:42:34.788 [2024-11-20 14:03:31.919731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016efd208 00:42:34.788 [2024-11-20 14:03:31.920965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:34.788 [2024-11-20 14:03:31.921010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:34.788 [2024-11-20 14:03:31.933012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016efda78 00:42:34.788 [2024-11-20 14:03:31.934112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:34.788 [2024-11-20 14:03:31.934208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:42:34.788 [2024-11-20 14:03:31.946054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016efe2e8 00:42:34.788 [2024-11-20 14:03:31.947279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:34.788 [2024-11-20 14:03:31.947325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0076 
p:0 m:0 dnr:0 00:42:34.788 [2024-11-20 14:03:31.959636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016efeb58 00:42:34.788 [2024-11-20 14:03:31.960844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:34.788 [2024-11-20 14:03:31.960943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:42:34.788 [2024-11-20 14:03:31.979012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016efef90 00:42:34.788 [2024-11-20 14:03:31.981306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:34.788 [2024-11-20 14:03:31.981351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:42:34.788 [2024-11-20 14:03:31.993496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016efeb58 00:42:34.788 [2024-11-20 14:03:31.995764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:34.788 [2024-11-20 14:03:31.995876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:42:34.788 [2024-11-20 14:03:32.007406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016efe2e8 00:42:34.788 [2024-11-20 14:03:32.009705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:34.788 [2024-11-20 14:03:32.009751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:42:34.788 [2024-11-20 14:03:32.021260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016efda78 00:42:34.788 [2024-11-20 14:03:32.023332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:34.788 [2024-11-20 14:03:32.023434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:42:34.788 [2024-11-20 14:03:32.034870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016efd208 00:42:34.788 [2024-11-20 14:03:32.037036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:34.788 [2024-11-20 14:03:32.037081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:42:34.788 [2024-11-20 14:03:32.048474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016efc998 00:42:34.788 [2024-11-20 14:03:32.050677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:34.788 [2024-11-20 14:03:32.050789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 
sqhd:0067 p:0 m:0 dnr:0 00:42:34.788 [2024-11-20 14:03:32.062351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016efc128 00:42:34.788 [2024-11-20 14:03:32.064594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:34.788 [2024-11-20 14:03:32.064695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:42:34.788 [2024-11-20 14:03:32.076181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016efb8b8 00:42:34.788 [2024-11-20 14:03:32.078299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:23925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:34.788 [2024-11-20 14:03:32.078400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:42:34.788 [2024-11-20 14:03:32.089892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016efb048 00:42:34.788 [2024-11-20 14:03:32.091965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:34.788 [2024-11-20 14:03:32.092073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:34.788 [2024-11-20 14:03:32.104243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016efa7d8 00:42:34.788 [2024-11-20 14:03:32.106469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:34.788 [2024-11-20 14:03:32.106582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:42:35.048 [2024-11-20 14:03:32.119783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ef9f68 00:42:35.048 [2024-11-20 14:03:32.122175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.048 [2024-11-20 14:03:32.122298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:42:35.048 [2024-11-20 14:03:32.135152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ef96f8 00:42:35.048 [2024-11-20 14:03:32.137422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:18831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.048 [2024-11-20 14:03:32.137536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:42:35.048 [2024-11-20 14:03:32.149595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ef8e88 00:42:35.048 [2024-11-20 14:03:32.151753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.048 [2024-11-20 14:03:32.151874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:33 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:42:35.048 [2024-11-20 14:03:32.163675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ef8618 00:42:35.048 [2024-11-20 14:03:32.165832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:10502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.048 [2024-11-20 14:03:32.165940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:42:35.048 [2024-11-20 14:03:32.178369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ef7da8 00:42:35.048 [2024-11-20 14:03:32.180616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:10055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.048 [2024-11-20 14:03:32.180751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:42:35.048 [2024-11-20 14:03:32.193510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ef7538 00:42:35.048 [2024-11-20 14:03:32.195674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.048 [2024-11-20 14:03:32.195830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:42:35.048 [2024-11-20 14:03:32.207592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ef6cc8 00:42:35.048 [2024-11-20 14:03:32.209758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:20643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.048 [2024-11-20 14:03:32.209892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:42:35.048 [2024-11-20 14:03:32.221731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ef6458 00:42:35.048 [2024-11-20 14:03:32.223726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:19266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.048 [2024-11-20 14:03:32.223845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:42:35.048 [2024-11-20 14:03:32.235532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ef5be8 00:42:35.048 [2024-11-20 14:03:32.237583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:11210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.048 [2024-11-20 14:03:32.237696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:42:35.048 [2024-11-20 14:03:32.249800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ef5378 00:42:35.049 [2024-11-20 14:03:32.251907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:19705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.049 [2024-11-20 14:03:32.252024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:42:35.049 [2024-11-20 14:03:32.263909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ef4b08 00:42:35.049 [2024-11-20 14:03:32.265904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:1915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.049 [2024-11-20 14:03:32.266016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:42:35.049 [2024-11-20 14:03:32.278273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ef4298 00:42:35.049 [2024-11-20 14:03:32.280429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.049 [2024-11-20 14:03:32.280546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:42:35.049 [2024-11-20 14:03:32.293553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ef3a28 00:42:35.049 [2024-11-20 14:03:32.295675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.049 [2024-11-20 14:03:32.295809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:42:35.049 [2024-11-20 14:03:32.307399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ef31b8 00:42:35.049 [2024-11-20 14:03:32.309259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.049 [2024-11-20 14:03:32.309364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:42:35.049 [2024-11-20 14:03:32.320885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ef2948 00:42:35.049 [2024-11-20 14:03:32.322765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.049 [2024-11-20 14:03:32.322866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:42:35.049 [2024-11-20 14:03:32.334402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ef20d8 00:42:35.049 [2024-11-20 14:03:32.336311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.049 [2024-11-20 14:03:32.336415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:42:35.049 [2024-11-20 14:03:32.347840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ef1868 00:42:35.049 [2024-11-20 14:03:32.349568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.049 [2024-11-20 14:03:32.349668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:42:35.049 [2024-11-20 14:03:32.361207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ef0ff8 00:42:35.049 [2024-11-20 14:03:32.363004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.049 [2024-11-20 14:03:32.363097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:42:35.309 [2024-11-20 14:03:32.374593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ef0788 00:42:35.309 [2024-11-20 14:03:32.376316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:10751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.310 [2024-11-20 14:03:32.376358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:42:35.310 [2024-11-20 14:03:32.387745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016eeff18 00:42:35.310 [2024-11-20 14:03:32.389424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.310 [2024-11-20 14:03:32.389513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:42:35.310 [2024-11-20 14:03:32.401079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016eef6a8 00:42:35.310 [2024-11-20 14:03:32.402900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.310 [2024-11-20 14:03:32.403004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:42:35.310 [2024-11-20 14:03:32.414479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016eeee38 00:42:35.310 [2024-11-20 14:03:32.416303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.310 [2024-11-20 14:03:32.416404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:42:35.310 [2024-11-20 14:03:32.427762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016eee5c8 00:42:35.310 [2024-11-20 14:03:32.429399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:6146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.310 [2024-11-20 14:03:32.429496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:42:35.310 [2024-11-20 14:03:32.441110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016eedd58 00:42:35.310 [2024-11-20 14:03:32.442837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.310 [2024-11-20 14:03:32.442943] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:42:35.310 [2024-11-20 14:03:32.454443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016eed4e8 00:42:35.310 [2024-11-20 14:03:32.456239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:24306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.310 [2024-11-20 14:03:32.456338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:42:35.310 [2024-11-20 14:03:32.467748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016eecc78 00:42:35.310 [2024-11-20 14:03:32.469366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.310 [2024-11-20 14:03:32.469501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:42:35.310 [2024-11-20 14:03:32.481588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016eec408 00:42:35.310 [2024-11-20 14:03:32.483392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.310 [2024-11-20 14:03:32.483501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:42:35.310 [2024-11-20 14:03:32.495476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016eebb98 00:42:35.310 [2024-11-20 14:03:32.497150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:6650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.310 [2024-11-20 14:03:32.497262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:42:35.310 [2024-11-20 14:03:32.509449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016eeb328 00:42:35.310 [2024-11-20 14:03:32.511221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.310 [2024-11-20 14:03:32.511335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:42:35.310 [2024-11-20 14:03:32.523549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016eeaab8 00:42:35.310 [2024-11-20 14:03:32.525143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.310 [2024-11-20 14:03:32.525279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:42:35.310 [2024-11-20 14:03:32.537533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016eea248 00:42:35.310 [2024-11-20 14:03:32.539281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.310 [2024-11-20 
14:03:32.539408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:35.310 [2024-11-20 14:03:32.552165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ee99d8 00:42:35.310 [2024-11-20 14:03:32.553714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:8108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.310 [2024-11-20 14:03:32.553821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:42:35.310 [2024-11-20 14:03:32.566056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ee9168 00:42:35.310 [2024-11-20 14:03:32.567737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.310 [2024-11-20 14:03:32.567851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:42:35.310 [2024-11-20 14:03:32.581056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ee88f8 00:42:35.310 [2024-11-20 14:03:32.582770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.310 [2024-11-20 14:03:32.582880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:42:35.310 [2024-11-20 14:03:32.596010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ee8088 00:42:35.310 [2024-11-20 14:03:32.597619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:15369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.310 [2024-11-20 14:03:32.597769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:42:35.310 [2024-11-20 14:03:32.609786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ee7818 00:42:35.310 [2024-11-20 14:03:32.611249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.310 [2024-11-20 14:03:32.611360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:42:35.310 [2024-11-20 14:03:32.623226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ee6fa8 00:42:35.310 [2024-11-20 14:03:32.624725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:9080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.310 [2024-11-20 14:03:32.624837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:42:35.571 [2024-11-20 14:03:32.636597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ee6738 00:42:35.571 [2024-11-20 14:03:32.637985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:47 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:42:35.571 [2024-11-20 14:03:32.638085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:42:35.571 [2024-11-20 14:03:32.649873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ee5ec8 00:42:35.571 [2024-11-20 14:03:32.651295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.571 [2024-11-20 14:03:32.651339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:42:35.571 [2024-11-20 14:03:32.663447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ee5658 00:42:35.571 [2024-11-20 14:03:32.664768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.571 [2024-11-20 14:03:32.664860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:42:35.571 [2024-11-20 14:03:32.676567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ee4de8 00:42:35.571 [2024-11-20 14:03:32.677955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:19265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.571 [2024-11-20 14:03:32.678000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:42:35.571 [2024-11-20 14:03:32.689904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ee4578 00:42:35.571 [2024-11-20 14:03:32.691203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.571 [2024-11-20 14:03:32.691250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:42:35.571 [2024-11-20 14:03:32.703018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ee3d08 00:42:35.571 [2024-11-20 14:03:32.704335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.571 [2024-11-20 14:03:32.704435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:42:35.571 [2024-11-20 14:03:32.716369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ee3498 00:42:35.571 [2024-11-20 14:03:32.717657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.571 [2024-11-20 14:03:32.717692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:42:35.571 [2024-11-20 14:03:32.729504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ee2c28 00:42:35.571 [2024-11-20 14:03:32.730703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:11759 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.571 [2024-11-20 14:03:32.730752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:42:35.571 [2024-11-20 14:03:32.742662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ee23b8 00:42:35.571 [2024-11-20 14:03:32.744007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.571 [2024-11-20 14:03:32.744052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:42:35.571 [2024-11-20 14:03:32.756147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ee1b48 00:42:35.571 [2024-11-20 14:03:32.757388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.571 [2024-11-20 14:03:32.757431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:42:35.571 [2024-11-20 14:03:32.769406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ee12d8 00:42:35.572 [2024-11-20 14:03:32.770583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.572 [2024-11-20 14:03:32.770680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:42:35.572 [2024-11-20 14:03:32.782867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ee0a68 00:42:35.572 [2024-11-20 14:03:32.784109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.572 [2024-11-20 14:03:32.784151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:42:35.572 [2024-11-20 14:03:32.795976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ee01f8 00:42:35.572 [2024-11-20 14:03:32.797146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.572 [2024-11-20 14:03:32.797253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:42:35.572 [2024-11-20 14:03:32.809430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016edf988 00:42:35.572 [2024-11-20 14:03:32.810548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.572 [2024-11-20 14:03:32.810590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:42:35.572 [2024-11-20 14:03:32.822623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016edf118 00:42:35.572 [2024-11-20 14:03:32.823883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:10 nsid:1 lba:12922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.572 [2024-11-20 14:03:32.823995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:42:35.572 [2024-11-20 14:03:32.836494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ede8a8 00:42:35.572 [2024-11-20 14:03:32.837813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.572 [2024-11-20 14:03:32.837854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:42:35.572 18218.00 IOPS, 71.16 MiB/s [2024-11-20T14:03:32.895Z] [2024-11-20 14:03:32.849932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ede038 00:42:35.572 [2024-11-20 14:03:32.851002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.572 [2024-11-20 14:03:32.851046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:42:35.572 [2024-11-20 14:03:32.868665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ede038 00:42:35.572 [2024-11-20 14:03:32.870861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.572 [2024-11-20 14:03:32.870910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:42:35.572 [2024-11-20 14:03:32.882402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ede8a8 00:42:35.572 [2024-11-20 14:03:32.884777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.572 [2024-11-20 14:03:32.884834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:42:35.833 [2024-11-20 14:03:32.896765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016edf118 00:42:35.833 [2024-11-20 14:03:32.898906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.833 [2024-11-20 14:03:32.899038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:42:35.833 [2024-11-20 14:03:32.911507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016edf988 00:42:35.833 [2024-11-20 14:03:32.913810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.833 [2024-11-20 14:03:32.913860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:42:35.833 [2024-11-20 14:03:32.926457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ee01f8 00:42:35.833 [2024-11-20 
14:03:32.928663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.833 [2024-11-20 14:03:32.928773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:42:35.833 [2024-11-20 14:03:32.940674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ee0a68 00:42:35.833 [2024-11-20 14:03:32.942899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.833 [2024-11-20 14:03:32.942951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:42:35.833 [2024-11-20 14:03:32.954818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ee12d8 00:42:35.833 [2024-11-20 14:03:32.956951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:17161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.833 [2024-11-20 14:03:32.956997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:42:35.833 [2024-11-20 14:03:32.968504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ee1b48 00:42:35.833 [2024-11-20 14:03:32.970729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.833 [2024-11-20 14:03:32.970772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:35.833 [2024-11-20 14:03:32.982813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ee23b8 00:42:35.833 [2024-11-20 14:03:32.985026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.833 [2024-11-20 14:03:32.985075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:42:35.833 [2024-11-20 14:03:32.996909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ee2c28 00:42:35.833 [2024-11-20 14:03:32.998938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.833 [2024-11-20 14:03:32.999062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:42:35.833 [2024-11-20 14:03:33.010548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ee3498 00:42:35.833 [2024-11-20 14:03:33.012693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:18666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.833 [2024-11-20 14:03:33.012738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:42:35.833 [2024-11-20 14:03:33.024740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with 
pdu=0x200016ee3d08 00:42:35.833 [2024-11-20 14:03:33.026888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.833 [2024-11-20 14:03:33.026937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:42:35.833 [2024-11-20 14:03:33.038549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ee4578 00:42:35.833 [2024-11-20 14:03:33.040593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.833 [2024-11-20 14:03:33.040638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:42:35.833 [2024-11-20 14:03:33.052456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ee4de8 00:42:35.833 [2024-11-20 14:03:33.054479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:4924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.833 [2024-11-20 14:03:33.054580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:42:35.833 [2024-11-20 14:03:33.066503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ee5658 00:42:35.833 [2024-11-20 14:03:33.068540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.833 [2024-11-20 14:03:33.068582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:42:35.833 [2024-11-20 14:03:33.080388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ee5ec8 00:42:35.833 [2024-11-20 14:03:33.082395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:8360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.833 [2024-11-20 14:03:33.082436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:42:35.833 [2024-11-20 14:03:33.094659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ee6738 00:42:35.833 [2024-11-20 14:03:33.096729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.833 [2024-11-20 14:03:33.096770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:42:35.833 [2024-11-20 14:03:33.108261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ee6fa8 00:42:35.833 [2024-11-20 14:03:33.110202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:1932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.833 [2024-11-20 14:03:33.110244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:42:35.833 [2024-11-20 14:03:33.122067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x154cae0) with pdu=0x200016ee7818 00:42:35.833 [2024-11-20 14:03:33.124177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:25298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.833 [2024-11-20 14:03:33.124227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:42:35.833 [2024-11-20 14:03:33.137167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ee8088 00:42:35.833 [2024-11-20 14:03:33.139419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:35.833 [2024-11-20 14:03:33.139471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:42:35.833 [2024-11-20 14:03:33.152508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ee88f8 00:42:36.094 [2024-11-20 14:03:33.154587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.094 [2024-11-20 14:03:33.154632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:42:36.094 [2024-11-20 14:03:33.167542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ee9168 00:42:36.094 [2024-11-20 14:03:33.169628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:11480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.094 [2024-11-20 14:03:33.169670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:42:36.094 [2024-11-20 14:03:33.181678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ee99d8 00:42:36.095 [2024-11-20 14:03:33.183619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:11970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.095 [2024-11-20 14:03:33.183657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:42:36.095 [2024-11-20 14:03:33.195349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016eea248 00:42:36.095 [2024-11-20 14:03:33.197244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:17954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.095 [2024-11-20 14:03:33.197349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:36.095 [2024-11-20 14:03:33.209456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016eeaab8 00:42:36.095 [2024-11-20 14:03:33.211362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.095 [2024-11-20 14:03:33.211409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:42:36.095 [2024-11-20 14:03:33.223331] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x154cae0) with pdu=0x200016eeb328 00:42:36.095 [2024-11-20 14:03:33.225153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:18306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.095 [2024-11-20 14:03:33.225213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:42:36.095 [2024-11-20 14:03:33.237399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016eebb98 00:42:36.095 [2024-11-20 14:03:33.239250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.095 [2024-11-20 14:03:33.239300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:42:36.095 [2024-11-20 14:03:33.251417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016eec408 00:42:36.095 [2024-11-20 14:03:33.253101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.095 [2024-11-20 14:03:33.253144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:42:36.095 [2024-11-20 14:03:33.265267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016eecc78 00:42:36.095 [2024-11-20 14:03:33.267029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:3607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.095 [2024-11-20 14:03:33.267153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:42:36.095 [2024-11-20 14:03:33.279661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016eed4e8 00:42:36.095 [2024-11-20 14:03:33.281490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.095 [2024-11-20 14:03:33.281539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:42:36.095 [2024-11-20 14:03:33.293855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016eedd58 00:42:36.095 [2024-11-20 14:03:33.295820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.095 [2024-11-20 14:03:33.295866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:42:36.095 [2024-11-20 14:03:33.308341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016eee5c8 00:42:36.095 [2024-11-20 14:03:33.309965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.095 [2024-11-20 14:03:33.310075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:42:36.095 [2024-11-20 14:03:33.322216] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016eeee38 00:42:36.095 [2024-11-20 14:03:33.323946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.095 [2024-11-20 14:03:33.324061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:42:36.095 [2024-11-20 14:03:33.336460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016eef6a8 00:42:36.095 [2024-11-20 14:03:33.338298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:5592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.095 [2024-11-20 14:03:33.338343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:42:36.095 [2024-11-20 14:03:33.350476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016eeff18 00:42:36.095 [2024-11-20 14:03:33.352185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.095 [2024-11-20 14:03:33.352237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:42:36.095 [2024-11-20 14:03:33.364219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ef0788 00:42:36.095 [2024-11-20 14:03:33.365703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:24015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.095 [2024-11-20 14:03:33.365753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:42:36.095 [2024-11-20 14:03:33.377443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ef0ff8 00:42:36.095 [2024-11-20 14:03:33.379127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.095 [2024-11-20 14:03:33.379171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:42:36.095 [2024-11-20 14:03:33.390858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ef1868 00:42:36.095 [2024-11-20 14:03:33.392401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:25524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.095 [2024-11-20 14:03:33.392445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:42:36.095 [2024-11-20 14:03:33.404392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ef20d8 00:42:36.095 [2024-11-20 14:03:33.406111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.095 [2024-11-20 14:03:33.406150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:42:36.356 
[2024-11-20 14:03:33.417817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ef2948 00:42:36.356 [2024-11-20 14:03:33.419317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:9805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.356 [2024-11-20 14:03:33.419362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:36.356 [2024-11-20 14:03:33.430797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ef31b8 00:42:36.356 [2024-11-20 14:03:33.432315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.356 [2024-11-20 14:03:33.432357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:42:36.356 [2024-11-20 14:03:33.443754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ef3a28 00:42:36.356 [2024-11-20 14:03:33.445164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.356 [2024-11-20 14:03:33.445207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:42:36.356 [2024-11-20 14:03:33.456766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ef4298 00:42:36.356 [2024-11-20 14:03:33.458371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.356 [2024-11-20 14:03:33.458412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:42:36.356 [2024-11-20 14:03:33.470235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ef4b08 00:42:36.356 [2024-11-20 14:03:33.471722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.356 [2024-11-20 14:03:33.471766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:42:36.356 [2024-11-20 14:03:33.483610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ef5378 00:42:36.356 [2024-11-20 14:03:33.485068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.356 [2024-11-20 14:03:33.485175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:42:36.356 [2024-11-20 14:03:33.496751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ef5be8 00:42:36.356 [2024-11-20 14:03:33.498082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:22122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.356 [2024-11-20 14:03:33.498124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0016 p:0 
m:0 dnr:0 00:42:36.356 [2024-11-20 14:03:33.509626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ef6458 00:42:36.356 [2024-11-20 14:03:33.510991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.356 [2024-11-20 14:03:33.511047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:42:36.356 [2024-11-20 14:03:33.522562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ef6cc8 00:42:36.356 [2024-11-20 14:03:33.524108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.356 [2024-11-20 14:03:33.524152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:42:36.356 [2024-11-20 14:03:33.535630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ef7538 00:42:36.356 [2024-11-20 14:03:33.536992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.356 [2024-11-20 14:03:33.537035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:42:36.356 [2024-11-20 14:03:33.548652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ef7da8 00:42:36.356 [2024-11-20 14:03:33.550051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.356 [2024-11-20 14:03:33.550095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:42:36.356 [2024-11-20 14:03:33.561829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ef8618 00:42:36.356 [2024-11-20 14:03:33.563139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.356 [2024-11-20 14:03:33.563251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:42:36.356 [2024-11-20 14:03:33.574878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ef8e88 00:42:36.356 [2024-11-20 14:03:33.576179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.356 [2024-11-20 14:03:33.576234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:42:36.356 [2024-11-20 14:03:33.587942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ef96f8 00:42:36.356 [2024-11-20 14:03:33.589396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:25599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.356 [2024-11-20 14:03:33.589438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 
cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:42:36.356 [2024-11-20 14:03:33.601607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016ef9f68 00:42:36.356 [2024-11-20 14:03:33.602996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.356 [2024-11-20 14:03:33.603040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:42:36.356 [2024-11-20 14:03:33.616232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016efa7d8 00:42:36.356 [2024-11-20 14:03:33.617581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.356 [2024-11-20 14:03:33.617685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:42:36.356 [2024-11-20 14:03:33.630553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016efb048 00:42:36.356 [2024-11-20 14:03:33.631864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.356 [2024-11-20 14:03:33.631910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:36.356 [2024-11-20 14:03:33.643761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016efb8b8 00:42:36.356 [2024-11-20 14:03:33.645045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:24246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.356 [2024-11-20 14:03:33.645151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:36.356 [2024-11-20 14:03:33.657007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016efc128 00:42:36.356 [2024-11-20 14:03:33.658184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.356 [2024-11-20 14:03:33.658230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:42:36.356 [2024-11-20 14:03:33.670088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016efc998 00:42:36.356 [2024-11-20 14:03:33.671261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.356 [2024-11-20 14:03:33.671310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:42:36.638 [2024-11-20 14:03:33.683510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016efd208 00:42:36.638 [2024-11-20 14:03:33.684883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.638 [2024-11-20 14:03:33.684923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:42:36.638 [2024-11-20 14:03:33.696919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016efda78 00:42:36.638 [2024-11-20 14:03:33.698022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.638 [2024-11-20 14:03:33.698067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:42:36.638 [2024-11-20 14:03:33.709920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016efe2e8 00:42:36.638 [2024-11-20 14:03:33.711123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.638 [2024-11-20 14:03:33.711168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:42:36.638 [2024-11-20 14:03:33.723050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016efeb58 00:42:36.638 [2024-11-20 14:03:33.724175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.638 [2024-11-20 14:03:33.724220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:42:36.638 [2024-11-20 14:03:33.741560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016efef90 00:42:36.638 [2024-11-20 14:03:33.743885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:91 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.638 [2024-11-20 14:03:33.743933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:42:36.638 [2024-11-20 14:03:33.755022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016efeb58 00:42:36.638 [2024-11-20 14:03:33.757244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.638 [2024-11-20 14:03:33.757287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:42:36.638 [2024-11-20 14:03:33.768282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016efe2e8 00:42:36.638 [2024-11-20 14:03:33.770457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.638 [2024-11-20 14:03:33.770496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:42:36.638 [2024-11-20 14:03:33.781610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016efda78 00:42:36.638 [2024-11-20 14:03:33.783744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.638 [2024-11-20 14:03:33.783788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:42:36.638 [2024-11-20 14:03:33.795052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016efd208 00:42:36.638 [2024-11-20 14:03:33.797378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.638 [2024-11-20 14:03:33.797420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:42:36.638 [2024-11-20 14:03:33.808667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016efc998 00:42:36.638 [2024-11-20 14:03:33.810890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.638 [2024-11-20 14:03:33.811010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:42:36.638 [2024-11-20 14:03:33.822416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016efc128 00:42:36.638 [2024-11-20 14:03:33.824602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.638 [2024-11-20 14:03:33.824644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:42:36.638 [2024-11-20 14:03:33.835847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016efb8b8 00:42:36.638 [2024-11-20 14:03:33.837877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:20069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.638 [2024-11-20 14:03:33.837979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:42:36.638 18344.00 IOPS, 71.66 MiB/s [2024-11-20T14:03:33.961Z] [2024-11-20 14:03:33.850765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x154cae0) with pdu=0x200016efb048 00:42:36.638 [2024-11-20 14:03:33.852888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:42:36.638 [2024-11-20 14:03:33.852932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:42:36.638 00:42:36.638 Latency(us) 00:42:36.638 [2024-11-20T14:03:33.961Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:36.638 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:42:36.638 nvme0n1 : 2.01 18337.27 71.63 0.00 0.00 6973.87 5580.58 26901.24 00:42:36.638 [2024-11-20T14:03:33.961Z] =================================================================================================================== 00:42:36.638 [2024-11-20T14:03:33.961Z] Total : 18337.27 71.63 0.00 0.00 6973.87 5580.58 26901.24 00:42:36.638 { 00:42:36.638 "results": [ 00:42:36.638 { 00:42:36.638 "job": "nvme0n1", 00:42:36.638 "core_mask": "0x2", 00:42:36.638 "workload": "randwrite", 00:42:36.639 "status": "finished", 00:42:36.639 "queue_depth": 128, 00:42:36.639 "io_size": 4096, 00:42:36.639 "runtime": 
2.007714, 00:42:36.639 "iops": 18337.27313750863, 00:42:36.639 "mibps": 71.62997319339308, 00:42:36.639 "io_failed": 0, 00:42:36.639 "io_timeout": 0, 00:42:36.639 "avg_latency_us": 6973.873577654675, 00:42:36.639 "min_latency_us": 5580.576419213974, 00:42:36.639 "max_latency_us": 26901.24017467249 00:42:36.639 } 00:42:36.639 ], 00:42:36.639 "core_count": 1 00:42:36.639 } 00:42:36.639 14:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:42:36.639 14:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:42:36.639 14:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:42:36.639 14:03:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:42:36.639 | .driver_specific 00:42:36.639 | .nvme_error 00:42:36.639 | .status_code 00:42:36.639 | .command_transient_transport_error' 00:42:36.908 14:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 144 > 0 )) 00:42:36.908 14:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80467 00:42:36.908 14:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80467 ']' 00:42:36.908 14:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80467 00:42:36.908 14:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:42:36.908 14:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:36.908 14:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80467 00:42:36.908 14:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:42:36.908 14:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:42:36.908 14:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80467' 00:42:36.908 killing process with pid 80467 00:42:36.908 14:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80467 00:42:36.908 Received shutdown signal, test time was about 2.000000 seconds 00:42:36.908 00:42:36.908 Latency(us) 00:42:36.908 [2024-11-20T14:03:34.231Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:36.908 [2024-11-20T14:03:34.231Z] =================================================================================================================== 00:42:36.908 [2024-11-20T14:03:34.231Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:36.908 14:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80467 00:42:37.168 14:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:42:37.168 14:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:42:37.168 14:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:42:37.168 14:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # 
bs=131072 00:42:37.168 14:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:42:37.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:37.168 14:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80527 00:42:37.168 14:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80527 /var/tmp/bperf.sock 00:42:37.168 14:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80527 ']' 00:42:37.168 14:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:37.168 14:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:37.168 14:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:42:37.168 14:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:37.168 14:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:37.168 14:03:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:42:37.427 [2024-11-20 14:03:34.521681] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:42:37.427 [2024-11-20 14:03:34.521879] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536). 00:42:37.427 Zero copy mechanism will not be used. 
00:42:37.427 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80527 ] 00:42:37.427 [2024-11-20 14:03:34.656811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:37.427 [2024-11-20 14:03:34.739495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:37.686 [2024-11-20 14:03:34.819597] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:42:38.252 14:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:38.252 14:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:42:38.252 14:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:42:38.252 14:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:42:38.511 14:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:42:38.511 14:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:38.511 14:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:42:38.511 14:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:38.511 14:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:42:38.511 14:03:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:42:38.770 nvme0n1 00:42:38.770 14:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:42:38.770 14:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:38.770 14:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:42:38.770 14:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:38.770 14:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:42:38.770 14:03:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:39.031 I/O size of 131072 is greater than zero copy threshold (65536). 00:42:39.031 Zero copy mechanism will not be used. 00:42:39.031 Running I/O for 2 seconds... 
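Note: the shell trace above is the setup half of the digest-error check in host/digest.sh: bdevperf is started against /var/tmp/bperf.sock, NVMe error counters are enabled, CRC32C corruption is injected through the accel error RPC, the controller is attached with --ddgst so data digests are actually verified on the TCP connection, and after the workload the number of commands that completed with a transient transport error is read back from bdev_get_iostat. A condensed sketch of that flow, using only the RPC calls visible in this trace (the use of the target's default RPC socket for the injection, versus bperf.sock for the initiator-side calls, is an assumption inferred from the rpc_cmd/bperf_rpc split in the trace), might look like:

  # Sketch only -- condensed from the rpc.py / bdevperf.py invocations traced above.
  BPERF_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  TGT_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"   # nvmf target, default socket (assumed)

  # Initiator side: keep per-bdev NVMe error counters and retry transient failures indefinitely.
  $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Attach with --ddgst so data digests are generated and checked on this connection.
  $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Target side: corrupt every 32nd CRC32C calculation so digest checks start failing.
  $TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 32
  # Drive the workload, then count completions with NVMe status "transient transport error".
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  errs=$($BPERF_RPC bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errs > 0 ))   # test expectation: the injected digest errors were observed and counted

Each "Data digest error on tqpair" line in the stream below corresponds to one digest check failed by the injected corruption, and the paired COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion is what increments the counter that the final check reads back.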
00:42:39.031 [2024-11-20 14:03:36.138459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.031 [2024-11-20 14:03:36.138636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.031 [2024-11-20 14:03:36.138673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.031 [2024-11-20 14:03:36.142888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.031 [2024-11-20 14:03:36.143315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.031 [2024-11-20 14:03:36.143455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.031 [2024-11-20 14:03:36.146971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.031 [2024-11-20 14:03:36.147047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.031 [2024-11-20 14:03:36.147076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.031 [2024-11-20 14:03:36.150902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.031 [2024-11-20 14:03:36.151070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.032 [2024-11-20 14:03:36.151094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.032 [2024-11-20 14:03:36.154943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.032 [2024-11-20 14:03:36.155053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.032 [2024-11-20 14:03:36.155077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.032 [2024-11-20 14:03:36.158994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.032 [2024-11-20 14:03:36.159163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.032 [2024-11-20 14:03:36.159186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.032 [2024-11-20 14:03:36.163088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.032 [2024-11-20 14:03:36.163210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.032 [2024-11-20 14:03:36.163235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:42:39.032 [2024-11-20 14:03:36.167210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.032 [2024-11-20 14:03:36.167467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.032 [2024-11-20 14:03:36.167491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.032 [2024-11-20 14:03:36.170850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.032 [2024-11-20 14:03:36.171271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.032 [2024-11-20 14:03:36.171302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.032 [2024-11-20 14:03:36.174842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.032 [2024-11-20 14:03:36.175018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.032 [2024-11-20 14:03:36.175042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.032 [2024-11-20 14:03:36.178945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.032 [2024-11-20 14:03:36.179038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.032 [2024-11-20 14:03:36.179062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.032 [2024-11-20 14:03:36.182860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.032 [2024-11-20 14:03:36.182989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.032 [2024-11-20 14:03:36.183012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.032 [2024-11-20 14:03:36.186898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.032 [2024-11-20 14:03:36.187001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.032 [2024-11-20 14:03:36.187024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.032 [2024-11-20 14:03:36.190444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.032 [2024-11-20 14:03:36.191006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.032 [2024-11-20 14:03:36.191043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.032 [2024-11-20 14:03:36.194551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.032 [2024-11-20 14:03:36.194625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.032 [2024-11-20 14:03:36.194648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.032 [2024-11-20 14:03:36.198720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.032 [2024-11-20 14:03:36.198845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.032 [2024-11-20 14:03:36.198869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.032 [2024-11-20 14:03:36.202780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.032 [2024-11-20 14:03:36.202871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.032 [2024-11-20 14:03:36.202895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.032 [2024-11-20 14:03:36.206337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.032 [2024-11-20 14:03:36.206635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.032 [2024-11-20 14:03:36.206668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.032 [2024-11-20 14:03:36.210560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.032 [2024-11-20 14:03:36.210746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.032 [2024-11-20 14:03:36.210772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.032 [2024-11-20 14:03:36.215120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.032 [2024-11-20 14:03:36.215198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.032 [2024-11-20 14:03:36.215223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.032 [2024-11-20 14:03:36.219443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.032 [2024-11-20 14:03:36.219541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.032 [2024-11-20 14:03:36.219565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.032 [2024-11-20 14:03:36.223526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.032 [2024-11-20 14:03:36.223820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.032 [2024-11-20 14:03:36.223844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.032 [2024-11-20 14:03:36.227392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.032 [2024-11-20 14:03:36.227759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.032 [2024-11-20 14:03:36.227789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.032 [2024-11-20 14:03:36.231366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.032 [2024-11-20 14:03:36.231498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.032 [2024-11-20 14:03:36.231521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.032 [2024-11-20 14:03:36.235437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.032 [2024-11-20 14:03:36.235507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.032 [2024-11-20 14:03:36.235531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.032 [2024-11-20 14:03:36.239500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.032 [2024-11-20 14:03:36.239663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.032 [2024-11-20 14:03:36.239686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.032 [2024-11-20 14:03:36.243545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.032 [2024-11-20 14:03:36.243657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.032 [2024-11-20 14:03:36.243678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.032 [2024-11-20 14:03:36.246888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.032 [2024-11-20 14:03:36.247036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.032 [2024-11-20 14:03:36.247058] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.032 [2024-11-20 14:03:36.250856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.032 [2024-11-20 14:03:36.250985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.032 [2024-11-20 14:03:36.251012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.032 [2024-11-20 14:03:36.255111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.032 [2024-11-20 14:03:36.255275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.032 [2024-11-20 14:03:36.255299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.032 [2024-11-20 14:03:36.259326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.033 [2024-11-20 14:03:36.259498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.033 [2024-11-20 14:03:36.259522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.033 [2024-11-20 14:03:36.263431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.033 [2024-11-20 14:03:36.263561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.033 [2024-11-20 14:03:36.263586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.033 [2024-11-20 14:03:36.267542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.033 [2024-11-20 14:03:36.267716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.033 [2024-11-20 14:03:36.267754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.033 [2024-11-20 14:03:36.271118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.033 [2024-11-20 14:03:36.271522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.033 [2024-11-20 14:03:36.271553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.033 [2024-11-20 14:03:36.275157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.033 [2024-11-20 14:03:36.275342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.033 [2024-11-20 
14:03:36.275366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.033 [2024-11-20 14:03:36.279106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.033 [2024-11-20 14:03:36.279195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.033 [2024-11-20 14:03:36.279219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.033 [2024-11-20 14:03:36.283124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.033 [2024-11-20 14:03:36.283318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.033 [2024-11-20 14:03:36.283340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.033 [2024-11-20 14:03:36.286817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.033 [2024-11-20 14:03:36.287183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.033 [2024-11-20 14:03:36.287212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.033 [2024-11-20 14:03:36.290580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.033 [2024-11-20 14:03:36.290733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.033 [2024-11-20 14:03:36.290756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.033 [2024-11-20 14:03:36.294549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.033 [2024-11-20 14:03:36.294672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.033 [2024-11-20 14:03:36.294695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.033 [2024-11-20 14:03:36.298602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.033 [2024-11-20 14:03:36.298808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.033 [2024-11-20 14:03:36.298831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.033 [2024-11-20 14:03:36.302563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.033 [2024-11-20 14:03:36.302682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:42:39.033 [2024-11-20 14:03:36.302705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.033 [2024-11-20 14:03:36.305934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.033 [2024-11-20 14:03:36.306055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.033 [2024-11-20 14:03:36.306078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.033 [2024-11-20 14:03:36.309926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.033 [2024-11-20 14:03:36.310098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.033 [2024-11-20 14:03:36.310119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.033 [2024-11-20 14:03:36.313990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.033 [2024-11-20 14:03:36.314062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.033 [2024-11-20 14:03:36.314083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.033 [2024-11-20 14:03:36.318007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.033 [2024-11-20 14:03:36.318225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.033 [2024-11-20 14:03:36.318248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.033 [2024-11-20 14:03:36.321592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.033 [2024-11-20 14:03:36.321988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.033 [2024-11-20 14:03:36.322010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.033 [2024-11-20 14:03:36.325504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.033 [2024-11-20 14:03:36.325741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.033 [2024-11-20 14:03:36.325765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.033 [2024-11-20 14:03:36.329480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.033 [2024-11-20 14:03:36.329718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5184 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.033 [2024-11-20 14:03:36.329751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.033 [2024-11-20 14:03:36.333508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.033 [2024-11-20 14:03:36.333735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.033 [2024-11-20 14:03:36.333758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.033 [2024-11-20 14:03:36.337074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.033 [2024-11-20 14:03:36.337483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.033 [2024-11-20 14:03:36.337507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.033 [2024-11-20 14:03:36.341024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.033 [2024-11-20 14:03:36.341147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.033 [2024-11-20 14:03:36.341169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.033 [2024-11-20 14:03:36.344963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.033 [2024-11-20 14:03:36.345106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.033 [2024-11-20 14:03:36.345127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.033 [2024-11-20 14:03:36.348908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.033 [2024-11-20 14:03:36.349083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.033 [2024-11-20 14:03:36.349104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.295 [2024-11-20 14:03:36.352907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.295 [2024-11-20 14:03:36.353000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.295 [2024-11-20 14:03:36.353021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.295 [2024-11-20 14:03:36.356235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.295 [2024-11-20 14:03:36.356370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 
nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.295 [2024-11-20 14:03:36.356392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.295 [2024-11-20 14:03:36.360345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.295 [2024-11-20 14:03:36.360517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.295 [2024-11-20 14:03:36.360537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.295 [2024-11-20 14:03:36.364444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.295 [2024-11-20 14:03:36.364592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.295 [2024-11-20 14:03:36.364613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.295 [2024-11-20 14:03:36.368417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.295 [2024-11-20 14:03:36.368616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.295 [2024-11-20 14:03:36.368638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.295 [2024-11-20 14:03:36.372110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.295 [2024-11-20 14:03:36.372500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.295 [2024-11-20 14:03:36.372528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.295 [2024-11-20 14:03:36.375986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.295 [2024-11-20 14:03:36.376144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.295 [2024-11-20 14:03:36.376167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.295 [2024-11-20 14:03:36.379935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.296 [2024-11-20 14:03:36.380017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.296 [2024-11-20 14:03:36.380041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.296 [2024-11-20 14:03:36.383934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.296 [2024-11-20 14:03:36.384139] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.296 [2024-11-20 14:03:36.384160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.296 [2024-11-20 14:03:36.387528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.296 [2024-11-20 14:03:36.387923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.296 [2024-11-20 14:03:36.387947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.296 [2024-11-20 14:03:36.391455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.296 [2024-11-20 14:03:36.391585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.296 [2024-11-20 14:03:36.391608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.296 [2024-11-20 14:03:36.395453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.296 [2024-11-20 14:03:36.395540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.296 [2024-11-20 14:03:36.395563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.296 [2024-11-20 14:03:36.399393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.296 [2024-11-20 14:03:36.399571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.296 [2024-11-20 14:03:36.399593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.296 [2024-11-20 14:03:36.402812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.296 [2024-11-20 14:03:36.403185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.296 [2024-11-20 14:03:36.403212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.296 [2024-11-20 14:03:36.406574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.296 [2024-11-20 14:03:36.406768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.296 [2024-11-20 14:03:36.406790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.296 [2024-11-20 14:03:36.410576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.296 [2024-11-20 14:03:36.410775] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.296 [2024-11-20 14:03:36.410797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.296 [2024-11-20 14:03:36.414556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.296 [2024-11-20 14:03:36.414659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.296 [2024-11-20 14:03:36.414681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.296 [2024-11-20 14:03:36.418040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.296 [2024-11-20 14:03:36.418415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.296 [2024-11-20 14:03:36.418444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.296 [2024-11-20 14:03:36.422100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.296 [2024-11-20 14:03:36.422301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.296 [2024-11-20 14:03:36.422323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.296 [2024-11-20 14:03:36.426138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.296 [2024-11-20 14:03:36.426259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.296 [2024-11-20 14:03:36.426282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.296 [2024-11-20 14:03:36.430155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.296 [2024-11-20 14:03:36.430258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.296 [2024-11-20 14:03:36.430281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.296 [2024-11-20 14:03:36.434275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.296 [2024-11-20 14:03:36.434367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.296 [2024-11-20 14:03:36.434392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.296 [2024-11-20 14:03:36.438030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.296 [2024-11-20 
14:03:36.438538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.296 [2024-11-20 14:03:36.438571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.296 [2024-11-20 14:03:36.442049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.296 [2024-11-20 14:03:36.442143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.296 [2024-11-20 14:03:36.442168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.296 [2024-11-20 14:03:36.446027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.296 [2024-11-20 14:03:36.446202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.296 [2024-11-20 14:03:36.446226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.296 [2024-11-20 14:03:36.450089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.296 [2024-11-20 14:03:36.450196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.296 [2024-11-20 14:03:36.450220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.296 [2024-11-20 14:03:36.453618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.296 [2024-11-20 14:03:36.454013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.296 [2024-11-20 14:03:36.454041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.296 [2024-11-20 14:03:36.457592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.296 [2024-11-20 14:03:36.457710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.296 [2024-11-20 14:03:36.457746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.296 [2024-11-20 14:03:36.461732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.296 [2024-11-20 14:03:36.461849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.296 [2024-11-20 14:03:36.461869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.296 [2024-11-20 14:03:36.465715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with 
pdu=0x200016eff3c8 00:42:39.296 [2024-11-20 14:03:36.465807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.296 [2024-11-20 14:03:36.465827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.296 [2024-11-20 14:03:36.469648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.296 [2024-11-20 14:03:36.469877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.296 [2024-11-20 14:03:36.469898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.296 [2024-11-20 14:03:36.473206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.296 [2024-11-20 14:03:36.473597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.296 [2024-11-20 14:03:36.473622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.296 [2024-11-20 14:03:36.477108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.296 [2024-11-20 14:03:36.477224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.296 [2024-11-20 14:03:36.477243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.296 [2024-11-20 14:03:36.481006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.296 [2024-11-20 14:03:36.481120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.296 [2024-11-20 14:03:36.481141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.296 [2024-11-20 14:03:36.484979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.297 [2024-11-20 14:03:36.485139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.297 [2024-11-20 14:03:36.485159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.297 [2024-11-20 14:03:36.488917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.297 [2024-11-20 14:03:36.489052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.297 [2024-11-20 14:03:36.489071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.297 [2024-11-20 14:03:36.492314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.297 [2024-11-20 14:03:36.492434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.297 [2024-11-20 14:03:36.492456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.297 [2024-11-20 14:03:36.496373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.297 [2024-11-20 14:03:36.496505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.297 [2024-11-20 14:03:36.496525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.297 [2024-11-20 14:03:36.500419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.297 [2024-11-20 14:03:36.500570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.297 [2024-11-20 14:03:36.500590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.297 [2024-11-20 14:03:36.504377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.297 [2024-11-20 14:03:36.504551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.297 [2024-11-20 14:03:36.504571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.297 [2024-11-20 14:03:36.507859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.297 [2024-11-20 14:03:36.508213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.297 [2024-11-20 14:03:36.508239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.297 [2024-11-20 14:03:36.511571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.297 [2024-11-20 14:03:36.511794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.297 [2024-11-20 14:03:36.511816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.297 [2024-11-20 14:03:36.515468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.297 [2024-11-20 14:03:36.515557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.297 [2024-11-20 14:03:36.515580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.297 [2024-11-20 14:03:36.519510] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.297 [2024-11-20 14:03:36.519619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.297 [2024-11-20 14:03:36.519642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.297 [2024-11-20 14:03:36.522964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.297 [2024-11-20 14:03:36.523312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.297 [2024-11-20 14:03:36.523338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.297 [2024-11-20 14:03:36.526778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.297 [2024-11-20 14:03:36.526914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.297 [2024-11-20 14:03:36.526959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.297 [2024-11-20 14:03:36.530725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.297 [2024-11-20 14:03:36.530801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.297 [2024-11-20 14:03:36.530821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.297 [2024-11-20 14:03:36.534672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.297 [2024-11-20 14:03:36.534977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.297 [2024-11-20 14:03:36.535001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.297 [2024-11-20 14:03:36.538295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.297 [2024-11-20 14:03:36.538656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.297 [2024-11-20 14:03:36.538677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.297 [2024-11-20 14:03:36.542202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.297 [2024-11-20 14:03:36.542265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.297 [2024-11-20 14:03:36.542285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.297 [2024-11-20 14:03:36.546153] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.297 [2024-11-20 14:03:36.546218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.297 [2024-11-20 14:03:36.546238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.297 [2024-11-20 14:03:36.550103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.297 [2024-11-20 14:03:36.550266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.297 [2024-11-20 14:03:36.550289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.297 [2024-11-20 14:03:36.553783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.297 [2024-11-20 14:03:36.554130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.297 [2024-11-20 14:03:36.554158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.297 [2024-11-20 14:03:36.557771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.297 [2024-11-20 14:03:36.557905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.297 [2024-11-20 14:03:36.557926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.297 [2024-11-20 14:03:36.561806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.297 [2024-11-20 14:03:36.561904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.297 [2024-11-20 14:03:36.561926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.297 [2024-11-20 14:03:36.565834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.297 [2024-11-20 14:03:36.565910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.297 [2024-11-20 14:03:36.565932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.297 [2024-11-20 14:03:36.570003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.297 [2024-11-20 14:03:36.570137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.297 [2024-11-20 14:03:36.570160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.297 
[2024-11-20 14:03:36.574078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.297 [2024-11-20 14:03:36.574148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.297 [2024-11-20 14:03:36.574170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.297 [2024-11-20 14:03:36.577681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.297 [2024-11-20 14:03:36.578200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.297 [2024-11-20 14:03:36.578238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.297 [2024-11-20 14:03:36.581771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.297 [2024-11-20 14:03:36.581853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.297 [2024-11-20 14:03:36.581874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.297 [2024-11-20 14:03:36.585932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.297 [2024-11-20 14:03:36.586059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.297 [2024-11-20 14:03:36.586082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.297 [2024-11-20 14:03:36.589997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.298 [2024-11-20 14:03:36.590099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.298 [2024-11-20 14:03:36.590120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.298 [2024-11-20 14:03:36.593548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.298 [2024-11-20 14:03:36.593928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.298 [2024-11-20 14:03:36.593955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.298 [2024-11-20 14:03:36.597455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.298 [2024-11-20 14:03:36.597602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.298 [2024-11-20 14:03:36.597624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:42:39.298 [2024-11-20 14:03:36.601474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.298 [2024-11-20 14:03:36.601538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.298 [2024-11-20 14:03:36.601558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.298 [2024-11-20 14:03:36.605505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.298 [2024-11-20 14:03:36.605601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.298 [2024-11-20 14:03:36.605620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.298 [2024-11-20 14:03:36.609523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.298 [2024-11-20 14:03:36.609678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.298 [2024-11-20 14:03:36.609698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.298 [2024-11-20 14:03:36.613555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.298 [2024-11-20 14:03:36.613622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.298 [2024-11-20 14:03:36.613642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.560 [2024-11-20 14:03:36.617080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.560 [2024-11-20 14:03:36.617540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.560 [2024-11-20 14:03:36.617566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.560 [2024-11-20 14:03:36.620887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.560 [2024-11-20 14:03:36.620996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.560 [2024-11-20 14:03:36.621029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.560 [2024-11-20 14:03:36.624787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.560 [2024-11-20 14:03:36.624981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.560 [2024-11-20 14:03:36.625014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.560 [2024-11-20 14:03:36.628799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.560 [2024-11-20 14:03:36.628902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.560 [2024-11-20 14:03:36.628924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.560 [2024-11-20 14:03:36.632242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.560 [2024-11-20 14:03:36.632615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.560 [2024-11-20 14:03:36.632637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.560 [2024-11-20 14:03:36.636142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.560 [2024-11-20 14:03:36.636280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.560 [2024-11-20 14:03:36.636301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.560 [2024-11-20 14:03:36.640035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.560 [2024-11-20 14:03:36.640175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.560 [2024-11-20 14:03:36.640195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.560 [2024-11-20 14:03:36.643915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.560 [2024-11-20 14:03:36.644012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.560 [2024-11-20 14:03:36.644033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.560 [2024-11-20 14:03:36.647852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.560 [2024-11-20 14:03:36.647929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.560 [2024-11-20 14:03:36.647950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.560 [2024-11-20 14:03:36.651452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.560 [2024-11-20 14:03:36.652021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.560 [2024-11-20 14:03:36.652054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.560 [2024-11-20 14:03:36.655536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.560 [2024-11-20 14:03:36.655687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.560 [2024-11-20 14:03:36.655719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.560 [2024-11-20 14:03:36.659600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.560 [2024-11-20 14:03:36.659781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.560 [2024-11-20 14:03:36.659802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.560 [2024-11-20 14:03:36.663612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.560 [2024-11-20 14:03:36.663698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.560 [2024-11-20 14:03:36.663734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.560 [2024-11-20 14:03:36.667269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.560 [2024-11-20 14:03:36.667582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.560 [2024-11-20 14:03:36.667613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.560 [2024-11-20 14:03:36.671263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.560 [2024-11-20 14:03:36.671446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.560 [2024-11-20 14:03:36.671470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.560 [2024-11-20 14:03:36.675373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.560 [2024-11-20 14:03:36.675443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.560 [2024-11-20 14:03:36.675467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.560 [2024-11-20 14:03:36.679531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.560 [2024-11-20 14:03:36.679631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.561 [2024-11-20 14:03:36.679653] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.561 [2024-11-20 14:03:36.683659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.561 [2024-11-20 14:03:36.683753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.561 [2024-11-20 14:03:36.683777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.561 [2024-11-20 14:03:36.687454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.561 [2024-11-20 14:03:36.687935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.561 [2024-11-20 14:03:36.687972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.561 [2024-11-20 14:03:36.691589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.561 [2024-11-20 14:03:36.691695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.561 [2024-11-20 14:03:36.691734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.561 [2024-11-20 14:03:36.695757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.561 [2024-11-20 14:03:36.695955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.561 [2024-11-20 14:03:36.695979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.561 [2024-11-20 14:03:36.700162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.561 [2024-11-20 14:03:36.700338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.561 [2024-11-20 14:03:36.700362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.561 [2024-11-20 14:03:36.704675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.561 [2024-11-20 14:03:36.704905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.561 [2024-11-20 14:03:36.704930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.561 [2024-11-20 14:03:36.708781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.561 [2024-11-20 14:03:36.709214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.561 [2024-11-20 
14:03:36.709246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.561 [2024-11-20 14:03:36.713278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.561 [2024-11-20 14:03:36.713464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.561 [2024-11-20 14:03:36.713489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.561 [2024-11-20 14:03:36.717729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.561 [2024-11-20 14:03:36.717886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.561 [2024-11-20 14:03:36.717911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.561 [2024-11-20 14:03:36.722301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.561 [2024-11-20 14:03:36.722424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.561 [2024-11-20 14:03:36.722449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.561 [2024-11-20 14:03:36.726702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.561 [2024-11-20 14:03:36.726960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.561 [2024-11-20 14:03:36.726984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.561 [2024-11-20 14:03:36.730697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.561 [2024-11-20 14:03:36.731132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.561 [2024-11-20 14:03:36.731156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.561 [2024-11-20 14:03:36.735229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.561 [2024-11-20 14:03:36.735385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.561 [2024-11-20 14:03:36.735410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.561 [2024-11-20 14:03:36.739547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.561 [2024-11-20 14:03:36.739660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:42:39.561 [2024-11-20 14:03:36.739685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.561 [2024-11-20 14:03:36.743913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.561 [2024-11-20 14:03:36.744066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.561 [2024-11-20 14:03:36.744090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.561 [2024-11-20 14:03:36.748261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.561 [2024-11-20 14:03:36.748470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.561 [2024-11-20 14:03:36.748494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.561 [2024-11-20 14:03:36.752323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.561 [2024-11-20 14:03:36.752651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.561 [2024-11-20 14:03:36.752674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.561 [2024-11-20 14:03:36.756526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.561 [2024-11-20 14:03:36.756649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.561 [2024-11-20 14:03:36.756671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.561 [2024-11-20 14:03:36.760940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.561 [2024-11-20 14:03:36.761107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.561 [2024-11-20 14:03:36.761134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.561 [2024-11-20 14:03:36.765367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.561 [2024-11-20 14:03:36.765522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.561 [2024-11-20 14:03:36.765547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.561 [2024-11-20 14:03:36.769594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.561 [2024-11-20 14:03:36.769723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.561 [2024-11-20 14:03:36.769749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.561 [2024-11-20 14:03:36.773518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.561 [2024-11-20 14:03:36.773697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.562 [2024-11-20 14:03:36.773737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.562 [2024-11-20 14:03:36.777651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.562 [2024-11-20 14:03:36.777753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.562 [2024-11-20 14:03:36.777777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.562 [2024-11-20 14:03:36.781631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.562 [2024-11-20 14:03:36.781735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.562 [2024-11-20 14:03:36.781759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.562 [2024-11-20 14:03:36.785334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.562 [2024-11-20 14:03:36.785651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.562 [2024-11-20 14:03:36.785675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.562 [2024-11-20 14:03:36.789323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.562 [2024-11-20 14:03:36.789474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.562 [2024-11-20 14:03:36.789495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.562 [2024-11-20 14:03:36.793471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.562 [2024-11-20 14:03:36.793612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.562 [2024-11-20 14:03:36.793636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.562 [2024-11-20 14:03:36.797732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.562 [2024-11-20 14:03:36.797890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 
nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.562 [2024-11-20 14:03:36.797913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.562 [2024-11-20 14:03:36.801816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.562 [2024-11-20 14:03:36.801892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.562 [2024-11-20 14:03:36.801915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.562 [2024-11-20 14:03:36.805409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.562 [2024-11-20 14:03:36.805876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.562 [2024-11-20 14:03:36.805902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.562 [2024-11-20 14:03:36.809446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.562 [2024-11-20 14:03:36.809516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.562 [2024-11-20 14:03:36.809539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.562 [2024-11-20 14:03:36.813680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.562 [2024-11-20 14:03:36.813875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.562 [2024-11-20 14:03:36.813898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.562 [2024-11-20 14:03:36.817636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.562 [2024-11-20 14:03:36.817741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.562 [2024-11-20 14:03:36.817763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.562 [2024-11-20 14:03:36.821250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.562 [2024-11-20 14:03:36.821592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.562 [2024-11-20 14:03:36.821614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.562 [2024-11-20 14:03:36.825496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.562 [2024-11-20 14:03:36.825579] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.562 [2024-11-20 14:03:36.825603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.562 [2024-11-20 14:03:36.829473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.562 [2024-11-20 14:03:36.829622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.562 [2024-11-20 14:03:36.829644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.562 [2024-11-20 14:03:36.833534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.562 [2024-11-20 14:03:36.833623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.562 [2024-11-20 14:03:36.833645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.562 [2024-11-20 14:03:36.837790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.562 [2024-11-20 14:03:36.837887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.562 [2024-11-20 14:03:36.837909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.562 [2024-11-20 14:03:36.841362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.562 [2024-11-20 14:03:36.841873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.562 [2024-11-20 14:03:36.841900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.562 [2024-11-20 14:03:36.845413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.562 [2024-11-20 14:03:36.845553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.562 [2024-11-20 14:03:36.845576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.562 [2024-11-20 14:03:36.849519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.562 [2024-11-20 14:03:36.849623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.562 [2024-11-20 14:03:36.849645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.562 [2024-11-20 14:03:36.853639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.562 [2024-11-20 14:03:36.853833] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.562 [2024-11-20 14:03:36.853855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.562 [2024-11-20 14:03:36.857756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.562 [2024-11-20 14:03:36.857978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.562 [2024-11-20 14:03:36.858000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.562 [2024-11-20 14:03:36.861333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.562 [2024-11-20 14:03:36.861790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.562 [2024-11-20 14:03:36.861817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.562 [2024-11-20 14:03:36.865362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.562 [2024-11-20 14:03:36.865536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.563 [2024-11-20 14:03:36.865558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.563 [2024-11-20 14:03:36.869450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.563 [2024-11-20 14:03:36.869611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.563 [2024-11-20 14:03:36.869633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.563 [2024-11-20 14:03:36.873539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.563 [2024-11-20 14:03:36.873711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.563 [2024-11-20 14:03:36.873746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.563 [2024-11-20 14:03:36.877638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.563 [2024-11-20 14:03:36.877759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.563 [2024-11-20 14:03:36.877781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.824 [2024-11-20 14:03:36.881062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.824 [2024-11-20 
14:03:36.881167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.824 [2024-11-20 14:03:36.881190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.824 [2024-11-20 14:03:36.885187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.824 [2024-11-20 14:03:36.885327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.824 [2024-11-20 14:03:36.885348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.824 [2024-11-20 14:03:36.889322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.824 [2024-11-20 14:03:36.889428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.824 [2024-11-20 14:03:36.889449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.824 [2024-11-20 14:03:36.893492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.824 [2024-11-20 14:03:36.893695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.824 [2024-11-20 14:03:36.893720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.824 [2024-11-20 14:03:36.897297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.824 [2024-11-20 14:03:36.897624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.824 [2024-11-20 14:03:36.897647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.824 [2024-11-20 14:03:36.901215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.824 [2024-11-20 14:03:36.901345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.824 [2024-11-20 14:03:36.901368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.824 [2024-11-20 14:03:36.905351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.824 [2024-11-20 14:03:36.905462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.824 [2024-11-20 14:03:36.905487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.824 [2024-11-20 14:03:36.909609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with 
pdu=0x200016eff3c8 00:42:39.824 [2024-11-20 14:03:36.909765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.824 [2024-11-20 14:03:36.909788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.824 [2024-11-20 14:03:36.913706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.824 [2024-11-20 14:03:36.913848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.824 [2024-11-20 14:03:36.913871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.824 [2024-11-20 14:03:36.917323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.824 [2024-11-20 14:03:36.917421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.824 [2024-11-20 14:03:36.917444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.824 [2024-11-20 14:03:36.921448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.824 [2024-11-20 14:03:36.921622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.824 [2024-11-20 14:03:36.921642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.824 [2024-11-20 14:03:36.925685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.824 [2024-11-20 14:03:36.925789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.824 [2024-11-20 14:03:36.925810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.824 [2024-11-20 14:03:36.929747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.824 [2024-11-20 14:03:36.929970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.824 [2024-11-20 14:03:36.929990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.824 [2024-11-20 14:03:36.933370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.824 [2024-11-20 14:03:36.933774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.824 [2024-11-20 14:03:36.933800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.824 [2024-11-20 14:03:36.937512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.824 [2024-11-20 14:03:36.937663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.824 [2024-11-20 14:03:36.937686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.824 [2024-11-20 14:03:36.941766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.824 [2024-11-20 14:03:36.941936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.824 [2024-11-20 14:03:36.941958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.824 [2024-11-20 14:03:36.945850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.825 [2024-11-20 14:03:36.946035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.825 [2024-11-20 14:03:36.946057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.825 [2024-11-20 14:03:36.949866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.825 [2024-11-20 14:03:36.949977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.825 [2024-11-20 14:03:36.949998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.825 [2024-11-20 14:03:36.953291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.825 [2024-11-20 14:03:36.953396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.825 [2024-11-20 14:03:36.953416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.825 [2024-11-20 14:03:36.957421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.825 [2024-11-20 14:03:36.957527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.825 [2024-11-20 14:03:36.957549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.825 [2024-11-20 14:03:36.961519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.825 [2024-11-20 14:03:36.961648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.825 [2024-11-20 14:03:36.961669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.825 [2024-11-20 14:03:36.965664] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.825 [2024-11-20 14:03:36.965872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.825 [2024-11-20 14:03:36.965900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.825 [2024-11-20 14:03:36.969378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.825 [2024-11-20 14:03:36.969691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.825 [2024-11-20 14:03:36.969731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.825 [2024-11-20 14:03:36.973236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.825 [2024-11-20 14:03:36.973364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.825 [2024-11-20 14:03:36.973387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.825 [2024-11-20 14:03:36.977288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.825 [2024-11-20 14:03:36.977384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.825 [2024-11-20 14:03:36.977408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.825 [2024-11-20 14:03:36.981399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.825 [2024-11-20 14:03:36.981548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.825 [2024-11-20 14:03:36.981570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.825 [2024-11-20 14:03:36.985358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.825 [2024-11-20 14:03:36.985465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.825 [2024-11-20 14:03:36.985488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.825 [2024-11-20 14:03:36.988812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.825 [2024-11-20 14:03:36.988897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.825 [2024-11-20 14:03:36.988918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.825 [2024-11-20 14:03:36.992678] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.825 [2024-11-20 14:03:36.992832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.825 [2024-11-20 14:03:36.992853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.825 [2024-11-20 14:03:36.996844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.825 [2024-11-20 14:03:36.996933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.825 [2024-11-20 14:03:36.996954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.825 [2024-11-20 14:03:37.000796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.825 [2024-11-20 14:03:37.001028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.825 [2024-11-20 14:03:37.001049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.825 [2024-11-20 14:03:37.004363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.825 [2024-11-20 14:03:37.004774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.825 [2024-11-20 14:03:37.004801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.825 [2024-11-20 14:03:37.008319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.825 [2024-11-20 14:03:37.008465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.825 [2024-11-20 14:03:37.008486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.825 [2024-11-20 14:03:37.012311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.825 [2024-11-20 14:03:37.012419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.825 [2024-11-20 14:03:37.012443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.825 [2024-11-20 14:03:37.016460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.825 [2024-11-20 14:03:37.016639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.825 [2024-11-20 14:03:37.016661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.825 
[2024-11-20 14:03:37.020667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.825 [2024-11-20 14:03:37.020785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.825 [2024-11-20 14:03:37.020807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.825 [2024-11-20 14:03:37.024021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.825 [2024-11-20 14:03:37.024114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.825 [2024-11-20 14:03:37.024137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.825 [2024-11-20 14:03:37.027983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.825 [2024-11-20 14:03:37.028092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.825 [2024-11-20 14:03:37.028113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.825 [2024-11-20 14:03:37.031973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.825 [2024-11-20 14:03:37.032099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.825 [2024-11-20 14:03:37.032120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.825 [2024-11-20 14:03:37.036031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.825 [2024-11-20 14:03:37.036136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.825 [2024-11-20 14:03:37.036158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.825 [2024-11-20 14:03:37.040198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.825 [2024-11-20 14:03:37.040275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.825 [2024-11-20 14:03:37.040298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.825 [2024-11-20 14:03:37.043747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.825 [2024-11-20 14:03:37.044236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.825 [2024-11-20 14:03:37.044275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:42:39.825 [2024-11-20 14:03:37.047690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.825 [2024-11-20 14:03:37.047849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.825 [2024-11-20 14:03:37.047874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.825 [2024-11-20 14:03:37.051785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.826 [2024-11-20 14:03:37.051976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.826 [2024-11-20 14:03:37.052001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.826 [2024-11-20 14:03:37.055751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.826 [2024-11-20 14:03:37.055845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.826 [2024-11-20 14:03:37.055867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.826 [2024-11-20 14:03:37.059261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.826 [2024-11-20 14:03:37.059616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.826 [2024-11-20 14:03:37.059640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.826 [2024-11-20 14:03:37.063126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.826 [2024-11-20 14:03:37.063287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.826 [2024-11-20 14:03:37.063311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.826 [2024-11-20 14:03:37.067039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.826 [2024-11-20 14:03:37.067109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.826 [2024-11-20 14:03:37.067131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.826 [2024-11-20 14:03:37.071072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.826 [2024-11-20 14:03:37.071146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.826 [2024-11-20 14:03:37.071169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.826 [2024-11-20 14:03:37.075052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.826 [2024-11-20 14:03:37.075144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.826 [2024-11-20 14:03:37.075169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.826 [2024-11-20 14:03:37.078588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.826 [2024-11-20 14:03:37.078993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.826 [2024-11-20 14:03:37.079018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.826 [2024-11-20 14:03:37.082583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.826 [2024-11-20 14:03:37.082730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.826 [2024-11-20 14:03:37.082756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.826 [2024-11-20 14:03:37.086530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.826 [2024-11-20 14:03:37.086671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.826 [2024-11-20 14:03:37.086692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.826 [2024-11-20 14:03:37.090722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.826 [2024-11-20 14:03:37.090857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.826 [2024-11-20 14:03:37.090881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.826 [2024-11-20 14:03:37.094736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.826 [2024-11-20 14:03:37.094829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.826 [2024-11-20 14:03:37.094853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.826 [2024-11-20 14:03:37.098273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.826 [2024-11-20 14:03:37.098811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.826 [2024-11-20 14:03:37.098844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.826 [2024-11-20 14:03:37.102223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.826 [2024-11-20 14:03:37.102357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.826 [2024-11-20 14:03:37.102379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.826 [2024-11-20 14:03:37.106305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.826 [2024-11-20 14:03:37.106407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.826 [2024-11-20 14:03:37.106431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.826 [2024-11-20 14:03:37.110348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.826 [2024-11-20 14:03:37.110451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.826 [2024-11-20 14:03:37.110474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.826 [2024-11-20 14:03:37.113937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.826 [2024-11-20 14:03:37.114270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.826 [2024-11-20 14:03:37.114299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.826 [2024-11-20 14:03:37.117855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.826 [2024-11-20 14:03:37.118024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.826 [2024-11-20 14:03:37.118047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.826 [2024-11-20 14:03:37.121906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.826 [2024-11-20 14:03:37.122082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.826 [2024-11-20 14:03:37.122105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.826 [2024-11-20 14:03:37.125959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.826 [2024-11-20 14:03:37.126118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.826 [2024-11-20 14:03:37.126140] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:39.826 [2024-11-20 14:03:37.129975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.826 [2024-11-20 14:03:37.130055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.826 [2024-11-20 14:03:37.130078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:39.826 7763.00 IOPS, 970.38 MiB/s [2024-11-20T14:03:37.149Z] [2024-11-20 14:03:37.134873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.826 [2024-11-20 14:03:37.135373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.826 [2024-11-20 14:03:37.135403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:39.826 [2024-11-20 14:03:37.138883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.826 [2024-11-20 14:03:37.139024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.826 [2024-11-20 14:03:37.139046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:39.826 [2024-11-20 14:03:37.142862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:39.826 [2024-11-20 14:03:37.142961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:39.826 [2024-11-20 14:03:37.142983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.088 [2024-11-20 14:03:37.146814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.088 [2024-11-20 14:03:37.147022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.088 [2024-11-20 14:03:37.147045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.088 [2024-11-20 14:03:37.150324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.088 [2024-11-20 14:03:37.150675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.088 [2024-11-20 14:03:37.150697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.088 [2024-11-20 14:03:37.154159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.088 [2024-11-20 14:03:37.154298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3232 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:42:40.088 [2024-11-20 14:03:37.154321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.088 [2024-11-20 14:03:37.158137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.088 [2024-11-20 14:03:37.158231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.088 [2024-11-20 14:03:37.158254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.088 [2024-11-20 14:03:37.162278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.088 [2024-11-20 14:03:37.162443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.088 [2024-11-20 14:03:37.162465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.088 [2024-11-20 14:03:37.166314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.088 [2024-11-20 14:03:37.166445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.088 [2024-11-20 14:03:37.166468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.088 [2024-11-20 14:03:37.169789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.088 [2024-11-20 14:03:37.169854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.088 [2024-11-20 14:03:37.169875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.088 [2024-11-20 14:03:37.173964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.088 [2024-11-20 14:03:37.174075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.088 [2024-11-20 14:03:37.174099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.088 [2024-11-20 14:03:37.178805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.088 [2024-11-20 14:03:37.179047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.088 [2024-11-20 14:03:37.179075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.088 [2024-11-20 14:03:37.183576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.089 [2024-11-20 14:03:37.184006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 
lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.089 [2024-11-20 14:03:37.184058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.089 [2024-11-20 14:03:37.187966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.089 [2024-11-20 14:03:37.188402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.089 [2024-11-20 14:03:37.188451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.089 [2024-11-20 14:03:37.192487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.089 [2024-11-20 14:03:37.192569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.089 [2024-11-20 14:03:37.192595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.089 [2024-11-20 14:03:37.197191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.089 [2024-11-20 14:03:37.197259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.089 [2024-11-20 14:03:37.197286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.089 [2024-11-20 14:03:37.202053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.089 [2024-11-20 14:03:37.202249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.089 [2024-11-20 14:03:37.202274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.089 [2024-11-20 14:03:37.206790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.089 [2024-11-20 14:03:37.207032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.089 [2024-11-20 14:03:37.207070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.089 [2024-11-20 14:03:37.211745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.089 [2024-11-20 14:03:37.211853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.089 [2024-11-20 14:03:37.211877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.089 [2024-11-20 14:03:37.216531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.089 [2024-11-20 14:03:37.216752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.089 [2024-11-20 14:03:37.216778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.089 [2024-11-20 14:03:37.220893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.089 [2024-11-20 14:03:37.221314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.089 [2024-11-20 14:03:37.221366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.089 [2024-11-20 14:03:37.225174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.089 [2024-11-20 14:03:37.225326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.089 [2024-11-20 14:03:37.225351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.089 [2024-11-20 14:03:37.229564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.089 [2024-11-20 14:03:37.229753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.089 [2024-11-20 14:03:37.229778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.089 [2024-11-20 14:03:37.234019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.089 [2024-11-20 14:03:37.234188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.089 [2024-11-20 14:03:37.234211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.089 [2024-11-20 14:03:37.238843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.089 [2024-11-20 14:03:37.239125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.089 [2024-11-20 14:03:37.239152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.089 [2024-11-20 14:03:37.243002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.089 [2024-11-20 14:03:37.243473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.089 [2024-11-20 14:03:37.243589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.089 [2024-11-20 14:03:37.247592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.089 [2024-11-20 14:03:37.247749] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.089 [2024-11-20 14:03:37.247778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.089 [2024-11-20 14:03:37.251850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.089 [2024-11-20 14:03:37.251956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.089 [2024-11-20 14:03:37.251980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.089 [2024-11-20 14:03:37.256084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.089 [2024-11-20 14:03:37.256242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.089 [2024-11-20 14:03:37.256264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.089 [2024-11-20 14:03:37.259656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.089 [2024-11-20 14:03:37.259903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.089 [2024-11-20 14:03:37.259926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.089 [2024-11-20 14:03:37.263405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.089 [2024-11-20 14:03:37.263470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.089 [2024-11-20 14:03:37.263493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.089 [2024-11-20 14:03:37.267424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.089 [2024-11-20 14:03:37.267556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.089 [2024-11-20 14:03:37.267579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.089 [2024-11-20 14:03:37.271429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.089 [2024-11-20 14:03:37.271594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.089 [2024-11-20 14:03:37.271617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.089 [2024-11-20 14:03:37.275387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.090 [2024-11-20 
14:03:37.275500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.090 [2024-11-20 14:03:37.275523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.090 [2024-11-20 14:03:37.278891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.090 [2024-11-20 14:03:37.279034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.090 [2024-11-20 14:03:37.279062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.090 [2024-11-20 14:03:37.282991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.090 [2024-11-20 14:03:37.283160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.090 [2024-11-20 14:03:37.283185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.090 [2024-11-20 14:03:37.287031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.090 [2024-11-20 14:03:37.287124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.090 [2024-11-20 14:03:37.287149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.090 [2024-11-20 14:03:37.290983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.090 [2024-11-20 14:03:37.291162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.090 [2024-11-20 14:03:37.291188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.090 [2024-11-20 14:03:37.294349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.090 [2024-11-20 14:03:37.294717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.090 [2024-11-20 14:03:37.294747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.090 [2024-11-20 14:03:37.298246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.090 [2024-11-20 14:03:37.298363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.090 [2024-11-20 14:03:37.298388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.090 [2024-11-20 14:03:37.302157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 
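The repeated tcp.c:2233:data_crc32_calc_done errors above report that the data digest carried in a PDU did not match the CRC recomputed over its payload; NVMe/TCP data digests are CRC-32C (Castagnoli), and each mismatch surfaces on the affected command as the TRANSIENT TRANSPORT ERROR (00/22) completion printed right after it, which is the behavior this test case exercises. A minimal sketch of that digest calculation, assuming a standalone pure-Python CRC-32C (the helper name nvme_tcp_data_digest is illustrative, not an SPDK API):

    # Pure-Python CRC-32C (Castagnoli, reflected polynomial 0x82F63B78), the
    # checksum NVMe/TCP uses for its header and data digests. Illustrative only.
    def crc32c(data: bytes) -> int:
        crc = 0xFFFFFFFF
        for byte in data:
            crc ^= byte
            for _ in range(8):
                if crc & 1:
                    crc = (crc >> 1) ^ 0x82F63B78
                else:
                    crc >>= 1
        return crc ^ 0xFFFFFFFF

    def nvme_tcp_data_digest(payload: bytes) -> int:
        # A "Data digest error" like the ones logged above means the value the
        # receiver computed this way did not match the digest field in the PDU.
        return crc32c(payload)

    # Known CRC-32C test vector: b"123456789" -> 0xE3069283
    assert crc32c(b"123456789") == 0xE3069283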
00:42:40.090 [2024-11-20 14:03:37.302254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.090 [2024-11-20 14:03:37.302279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.090 [2024-11-20 14:03:37.306105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.090 [2024-11-20 14:03:37.306267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.090 [2024-11-20 14:03:37.306291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.090 [2024-11-20 14:03:37.310027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.090 [2024-11-20 14:03:37.310135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.090 [2024-11-20 14:03:37.310158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.090 [2024-11-20 14:03:37.313280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.090 [2024-11-20 14:03:37.313434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.090 [2024-11-20 14:03:37.313457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.090 [2024-11-20 14:03:37.317055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.090 [2024-11-20 14:03:37.317135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.090 [2024-11-20 14:03:37.317158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.090 [2024-11-20 14:03:37.320820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.090 [2024-11-20 14:03:37.320959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.090 [2024-11-20 14:03:37.320981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.090 [2024-11-20 14:03:37.324676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.090 [2024-11-20 14:03:37.324765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.090 [2024-11-20 14:03:37.324789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.090 [2024-11-20 14:03:37.328049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.090 [2024-11-20 14:03:37.328235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.090 [2024-11-20 14:03:37.328258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.090 [2024-11-20 14:03:37.331872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.090 [2024-11-20 14:03:37.331949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.090 [2024-11-20 14:03:37.331973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.090 [2024-11-20 14:03:37.335618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.090 [2024-11-20 14:03:37.335684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.090 [2024-11-20 14:03:37.335726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.090 [2024-11-20 14:03:37.339152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.090 [2024-11-20 14:03:37.339550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.090 [2024-11-20 14:03:37.339584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.090 [2024-11-20 14:03:37.342908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.090 [2024-11-20 14:03:37.343065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.090 [2024-11-20 14:03:37.343089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.090 [2024-11-20 14:03:37.346818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.090 [2024-11-20 14:03:37.346894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.090 [2024-11-20 14:03:37.346917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.090 [2024-11-20 14:03:37.350717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.090 [2024-11-20 14:03:37.350927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.090 [2024-11-20 14:03:37.350960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.090 [2024-11-20 14:03:37.354209] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.090 [2024-11-20 14:03:37.354572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.090 [2024-11-20 14:03:37.354602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.090 [2024-11-20 14:03:37.357984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.090 [2024-11-20 14:03:37.358107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.090 [2024-11-20 14:03:37.358130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.090 [2024-11-20 14:03:37.361802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.090 [2024-11-20 14:03:37.361900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.091 [2024-11-20 14:03:37.361922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.091 [2024-11-20 14:03:37.365660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.091 [2024-11-20 14:03:37.365832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.091 [2024-11-20 14:03:37.365854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.091 [2024-11-20 14:03:37.369533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.091 [2024-11-20 14:03:37.369648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.091 [2024-11-20 14:03:37.369671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.091 [2024-11-20 14:03:37.372965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.091 [2024-11-20 14:03:37.373140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.091 [2024-11-20 14:03:37.373166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.091 [2024-11-20 14:03:37.376902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.091 [2024-11-20 14:03:37.377101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.091 [2024-11-20 14:03:37.377126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.091 [2024-11-20 14:03:37.380849] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.091 [2024-11-20 14:03:37.380926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.091 [2024-11-20 14:03:37.380951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.091 [2024-11-20 14:03:37.384246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.091 [2024-11-20 14:03:37.384553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.091 [2024-11-20 14:03:37.384584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.091 [2024-11-20 14:03:37.387974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.091 [2024-11-20 14:03:37.388039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.091 [2024-11-20 14:03:37.388062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.091 [2024-11-20 14:03:37.392014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.091 [2024-11-20 14:03:37.392089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.091 [2024-11-20 14:03:37.392113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.091 [2024-11-20 14:03:37.396031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.091 [2024-11-20 14:03:37.396134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.091 [2024-11-20 14:03:37.396159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.091 [2024-11-20 14:03:37.399947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.091 [2024-11-20 14:03:37.400174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.091 [2024-11-20 14:03:37.400198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.091 [2024-11-20 14:03:37.403444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.091 [2024-11-20 14:03:37.403926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.091 [2024-11-20 14:03:37.403964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.091 
[2024-11-20 14:03:37.407411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.091 [2024-11-20 14:03:37.407538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.091 [2024-11-20 14:03:37.407563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.353 [2024-11-20 14:03:37.411357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.353 [2024-11-20 14:03:37.411429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.353 [2024-11-20 14:03:37.411456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.353 [2024-11-20 14:03:37.415268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.353 [2024-11-20 14:03:37.415365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.353 [2024-11-20 14:03:37.415390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.353 [2024-11-20 14:03:37.419175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.353 [2024-11-20 14:03:37.419242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.353 [2024-11-20 14:03:37.419266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.353 [2024-11-20 14:03:37.422668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.353 [2024-11-20 14:03:37.423175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.353 [2024-11-20 14:03:37.423209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.353 [2024-11-20 14:03:37.426521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.353 [2024-11-20 14:03:37.426621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.353 [2024-11-20 14:03:37.426642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.353 [2024-11-20 14:03:37.430517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.353 [2024-11-20 14:03:37.430664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.353 [2024-11-20 14:03:37.430685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:42:40.353 [2024-11-20 14:03:37.434463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.353 [2024-11-20 14:03:37.434537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.353 [2024-11-20 14:03:37.434559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.353 [2024-11-20 14:03:37.437821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.353 [2024-11-20 14:03:37.438145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.353 [2024-11-20 14:03:37.438172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.353 [2024-11-20 14:03:37.441544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.353 [2024-11-20 14:03:37.441679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.353 [2024-11-20 14:03:37.441702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.353 [2024-11-20 14:03:37.445377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.353 [2024-11-20 14:03:37.445430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.353 [2024-11-20 14:03:37.445452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.353 [2024-11-20 14:03:37.449364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.353 [2024-11-20 14:03:37.449500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.353 [2024-11-20 14:03:37.449523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.353 [2024-11-20 14:03:37.453260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.353 [2024-11-20 14:03:37.453323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.353 [2024-11-20 14:03:37.453345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.353 [2024-11-20 14:03:37.456763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.353 [2024-11-20 14:03:37.457176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.353 [2024-11-20 14:03:37.457208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.354 [2024-11-20 14:03:37.460526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.354 [2024-11-20 14:03:37.460658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.354 [2024-11-20 14:03:37.460681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.354 [2024-11-20 14:03:37.464624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.354 [2024-11-20 14:03:37.464698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.354 [2024-11-20 14:03:37.464739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.354 [2024-11-20 14:03:37.468617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.354 [2024-11-20 14:03:37.468801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.354 [2024-11-20 14:03:37.468824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.354 [2024-11-20 14:03:37.472028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.354 [2024-11-20 14:03:37.472446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.354 [2024-11-20 14:03:37.472479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.354 [2024-11-20 14:03:37.475987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.354 [2024-11-20 14:03:37.476106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.354 [2024-11-20 14:03:37.476127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.354 [2024-11-20 14:03:37.479902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.354 [2024-11-20 14:03:37.479993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.354 [2024-11-20 14:03:37.480015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.354 [2024-11-20 14:03:37.483814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.354 [2024-11-20 14:03:37.484003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.354 [2024-11-20 14:03:37.484024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.354 [2024-11-20 14:03:37.487658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.354 [2024-11-20 14:03:37.487778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.354 [2024-11-20 14:03:37.487799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.354 [2024-11-20 14:03:37.490984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.354 [2024-11-20 14:03:37.491122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.354 [2024-11-20 14:03:37.491143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.354 [2024-11-20 14:03:37.494875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.354 [2024-11-20 14:03:37.495016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.354 [2024-11-20 14:03:37.495038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.354 [2024-11-20 14:03:37.498923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.354 [2024-11-20 14:03:37.499010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.354 [2024-11-20 14:03:37.499034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.354 [2024-11-20 14:03:37.502814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.354 [2024-11-20 14:03:37.503037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.354 [2024-11-20 14:03:37.503060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.354 [2024-11-20 14:03:37.506266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.354 [2024-11-20 14:03:37.506615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.354 [2024-11-20 14:03:37.506645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.354 [2024-11-20 14:03:37.510003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.354 [2024-11-20 14:03:37.510130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.354 [2024-11-20 14:03:37.510152] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.354 [2024-11-20 14:03:37.513870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.354 [2024-11-20 14:03:37.514057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.354 [2024-11-20 14:03:37.514080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.354 [2024-11-20 14:03:37.517644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.354 [2024-11-20 14:03:37.517736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.354 [2024-11-20 14:03:37.517759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.354 [2024-11-20 14:03:37.521054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.354 [2024-11-20 14:03:37.521393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.354 [2024-11-20 14:03:37.521423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.354 [2024-11-20 14:03:37.524907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.354 [2024-11-20 14:03:37.525094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.354 [2024-11-20 14:03:37.525115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.354 [2024-11-20 14:03:37.528792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.354 [2024-11-20 14:03:37.528875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.354 [2024-11-20 14:03:37.528896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.354 [2024-11-20 14:03:37.532735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.354 [2024-11-20 14:03:37.532847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.354 [2024-11-20 14:03:37.532868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.354 [2024-11-20 14:03:37.536573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.354 [2024-11-20 14:03:37.536645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.354 [2024-11-20 
14:03:37.536667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.354 [2024-11-20 14:03:37.539870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.354 [2024-11-20 14:03:37.539964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.354 [2024-11-20 14:03:37.539988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.354 [2024-11-20 14:03:37.543743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.354 [2024-11-20 14:03:37.543846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.354 [2024-11-20 14:03:37.543868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.354 [2024-11-20 14:03:37.547627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.354 [2024-11-20 14:03:37.547708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.354 [2024-11-20 14:03:37.547744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.354 [2024-11-20 14:03:37.551504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.354 [2024-11-20 14:03:37.551685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.354 [2024-11-20 14:03:37.551707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.354 [2024-11-20 14:03:37.554949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.354 [2024-11-20 14:03:37.555323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.354 [2024-11-20 14:03:37.555351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.354 [2024-11-20 14:03:37.558659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.354 [2024-11-20 14:03:37.558809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.354 [2024-11-20 14:03:37.558830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.355 [2024-11-20 14:03:37.562498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.355 [2024-11-20 14:03:37.562642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:42:40.355 [2024-11-20 14:03:37.562663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.355 [2024-11-20 14:03:37.566251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.355 [2024-11-20 14:03:37.566307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.355 [2024-11-20 14:03:37.566328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.355 [2024-11-20 14:03:37.569739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.355 [2024-11-20 14:03:37.570158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.355 [2024-11-20 14:03:37.570187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.355 [2024-11-20 14:03:37.573589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.355 [2024-11-20 14:03:37.573656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.355 [2024-11-20 14:03:37.573678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.355 [2024-11-20 14:03:37.577486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.355 [2024-11-20 14:03:37.577575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.355 [2024-11-20 14:03:37.577597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.355 [2024-11-20 14:03:37.581378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.355 [2024-11-20 14:03:37.581443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.355 [2024-11-20 14:03:37.581465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.355 [2024-11-20 14:03:37.584810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.355 [2024-11-20 14:03:37.585272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.355 [2024-11-20 14:03:37.585305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.355 [2024-11-20 14:03:37.588735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.355 [2024-11-20 14:03:37.588860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.355 [2024-11-20 14:03:37.588883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.355 [2024-11-20 14:03:37.592630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.355 [2024-11-20 14:03:37.592783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.355 [2024-11-20 14:03:37.592804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.355 [2024-11-20 14:03:37.596599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.355 [2024-11-20 14:03:37.596662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.355 [2024-11-20 14:03:37.596683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.355 [2024-11-20 14:03:37.600100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.355 [2024-11-20 14:03:37.600551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.355 [2024-11-20 14:03:37.600615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.355 [2024-11-20 14:03:37.603938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.355 [2024-11-20 14:03:37.604010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.355 [2024-11-20 14:03:37.604034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.355 [2024-11-20 14:03:37.607866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.355 [2024-11-20 14:03:37.608042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.355 [2024-11-20 14:03:37.608089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.355 [2024-11-20 14:03:37.611781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.355 [2024-11-20 14:03:37.611879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.355 [2024-11-20 14:03:37.611901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.355 [2024-11-20 14:03:37.615150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.355 [2024-11-20 14:03:37.615484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 
nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.355 [2024-11-20 14:03:37.615505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.355 [2024-11-20 14:03:37.618978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.355 [2024-11-20 14:03:37.619125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.355 [2024-11-20 14:03:37.619147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.355 [2024-11-20 14:03:37.622864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.355 [2024-11-20 14:03:37.622920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.355 [2024-11-20 14:03:37.622951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.355 [2024-11-20 14:03:37.626890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.355 [2024-11-20 14:03:37.627060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.355 [2024-11-20 14:03:37.627082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.355 [2024-11-20 14:03:37.630714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.355 [2024-11-20 14:03:37.630788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.355 [2024-11-20 14:03:37.630809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.355 [2024-11-20 14:03:37.633960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.355 [2024-11-20 14:03:37.634097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.355 [2024-11-20 14:03:37.634118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.355 [2024-11-20 14:03:37.637692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.355 [2024-11-20 14:03:37.637772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.355 [2024-11-20 14:03:37.637792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.355 [2024-11-20 14:03:37.641498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.355 [2024-11-20 14:03:37.641630] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.355 [2024-11-20 14:03:37.641651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.355 [2024-11-20 14:03:37.645309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.355 [2024-11-20 14:03:37.645374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.355 [2024-11-20 14:03:37.645394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.355 [2024-11-20 14:03:37.648709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.355 [2024-11-20 14:03:37.649187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.355 [2024-11-20 14:03:37.649220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.355 [2024-11-20 14:03:37.652541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.355 [2024-11-20 14:03:37.652657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.355 [2024-11-20 14:03:37.652678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.355 [2024-11-20 14:03:37.656520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.355 [2024-11-20 14:03:37.656592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.355 [2024-11-20 14:03:37.656613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.355 [2024-11-20 14:03:37.660452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.355 [2024-11-20 14:03:37.660647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.356 [2024-11-20 14:03:37.660669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.356 [2024-11-20 14:03:37.664017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.356 [2024-11-20 14:03:37.664383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.356 [2024-11-20 14:03:37.664405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.356 [2024-11-20 14:03:37.667843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.356 [2024-11-20 14:03:37.667960] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.356 [2024-11-20 14:03:37.667984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.356 [2024-11-20 14:03:37.671752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.356 [2024-11-20 14:03:37.671863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.356 [2024-11-20 14:03:37.671890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.617 [2024-11-20 14:03:37.675754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.617 [2024-11-20 14:03:37.675906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.617 [2024-11-20 14:03:37.675930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.617 [2024-11-20 14:03:37.679631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.617 [2024-11-20 14:03:37.679748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.617 [2024-11-20 14:03:37.679772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.617 [2024-11-20 14:03:37.683020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.617 [2024-11-20 14:03:37.683102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.618 [2024-11-20 14:03:37.683125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.618 [2024-11-20 14:03:37.686883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.618 [2024-11-20 14:03:37.687065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.618 [2024-11-20 14:03:37.687087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.618 [2024-11-20 14:03:37.690875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.618 [2024-11-20 14:03:37.691029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.618 [2024-11-20 14:03:37.691054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.618 [2024-11-20 14:03:37.694848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.618 [2024-11-20 
14:03:37.695074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.618 [2024-11-20 14:03:37.695097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.618 [2024-11-20 14:03:37.698292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.618 [2024-11-20 14:03:37.698634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.618 [2024-11-20 14:03:37.698667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.618 [2024-11-20 14:03:37.702176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.618 [2024-11-20 14:03:37.702264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.618 [2024-11-20 14:03:37.702287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.618 [2024-11-20 14:03:37.706341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.618 [2024-11-20 14:03:37.706462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.618 [2024-11-20 14:03:37.706488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.618 [2024-11-20 14:03:37.710214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.618 [2024-11-20 14:03:37.710298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.618 [2024-11-20 14:03:37.710321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.618 [2024-11-20 14:03:37.714102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.618 [2024-11-20 14:03:37.714296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.618 [2024-11-20 14:03:37.714317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.618 [2024-11-20 14:03:37.717574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.618 [2024-11-20 14:03:37.717935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.618 [2024-11-20 14:03:37.717968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.618 [2024-11-20 14:03:37.721446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 
00:42:40.618 [2024-11-20 14:03:37.721568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.618 [2024-11-20 14:03:37.721591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.618 [2024-11-20 14:03:37.725413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.618 [2024-11-20 14:03:37.725507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.618 [2024-11-20 14:03:37.725528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.618 [2024-11-20 14:03:37.729501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.618 [2024-11-20 14:03:37.729660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.618 [2024-11-20 14:03:37.729681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.618 [2024-11-20 14:03:37.733379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.618 [2024-11-20 14:03:37.733494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.618 [2024-11-20 14:03:37.733515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.618 [2024-11-20 14:03:37.737328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.618 [2024-11-20 14:03:37.737398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.618 [2024-11-20 14:03:37.737427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.618 [2024-11-20 14:03:37.741884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.618 [2024-11-20 14:03:37.742108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.618 [2024-11-20 14:03:37.742133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.618 [2024-11-20 14:03:37.746118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.618 [2024-11-20 14:03:37.746320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.618 [2024-11-20 14:03:37.746344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.618 [2024-11-20 14:03:37.749669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.618 [2024-11-20 14:03:37.750017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.618 [2024-11-20 14:03:37.750049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.618 [2024-11-20 14:03:37.753528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.618 [2024-11-20 14:03:37.753653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.618 [2024-11-20 14:03:37.753679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.618 [2024-11-20 14:03:37.757500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.618 [2024-11-20 14:03:37.757639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.618 [2024-11-20 14:03:37.757663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.618 [2024-11-20 14:03:37.761505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.618 [2024-11-20 14:03:37.761590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.618 [2024-11-20 14:03:37.761632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.618 [2024-11-20 14:03:37.765137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.618 [2024-11-20 14:03:37.765464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.618 [2024-11-20 14:03:37.765497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.618 [2024-11-20 14:03:37.768912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.618 [2024-11-20 14:03:37.769026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.618 [2024-11-20 14:03:37.769048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.618 [2024-11-20 14:03:37.773000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.618 [2024-11-20 14:03:37.773075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.618 [2024-11-20 14:03:37.773100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.618 [2024-11-20 14:03:37.777274] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.618 [2024-11-20 14:03:37.777439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.618 [2024-11-20 14:03:37.777461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.618 [2024-11-20 14:03:37.781225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.618 [2024-11-20 14:03:37.781332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.618 [2024-11-20 14:03:37.781352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.618 [2024-11-20 14:03:37.784413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.618 [2024-11-20 14:03:37.784464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.618 [2024-11-20 14:03:37.784484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.618 [2024-11-20 14:03:37.788379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.619 [2024-11-20 14:03:37.788458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.619 [2024-11-20 14:03:37.788483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.619 [2024-11-20 14:03:37.792335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.619 [2024-11-20 14:03:37.792458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.619 [2024-11-20 14:03:37.792483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.619 [2024-11-20 14:03:37.796305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.619 [2024-11-20 14:03:37.796375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.619 [2024-11-20 14:03:37.796399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.619 [2024-11-20 14:03:37.799823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.619 [2024-11-20 14:03:37.800289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.619 [2024-11-20 14:03:37.800319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.619 [2024-11-20 14:03:37.803624] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.619 [2024-11-20 14:03:37.803792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.619 [2024-11-20 14:03:37.803818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.619 [2024-11-20 14:03:37.807591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.619 [2024-11-20 14:03:37.807817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.619 [2024-11-20 14:03:37.807840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.619 [2024-11-20 14:03:37.811546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.619 [2024-11-20 14:03:37.811637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.619 [2024-11-20 14:03:37.811662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.619 [2024-11-20 14:03:37.814899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.619 [2024-11-20 14:03:37.815240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.619 [2024-11-20 14:03:37.815269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.619 [2024-11-20 14:03:37.818669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.619 [2024-11-20 14:03:37.818830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.619 [2024-11-20 14:03:37.818851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.619 [2024-11-20 14:03:37.822626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.619 [2024-11-20 14:03:37.822823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.619 [2024-11-20 14:03:37.822862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.619 [2024-11-20 14:03:37.826679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.619 [2024-11-20 14:03:37.826800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.619 [2024-11-20 14:03:37.826823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.619 
[2024-11-20 14:03:37.830799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.619 [2024-11-20 14:03:37.830984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.619 [2024-11-20 14:03:37.831015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.619 [2024-11-20 14:03:37.835240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.619 [2024-11-20 14:03:37.835411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.619 [2024-11-20 14:03:37.835441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.619 [2024-11-20 14:03:37.840151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.619 [2024-11-20 14:03:37.840342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.619 [2024-11-20 14:03:37.840367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.619 [2024-11-20 14:03:37.844280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.619 [2024-11-20 14:03:37.844485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.619 [2024-11-20 14:03:37.844510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.619 [2024-11-20 14:03:37.848645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.619 [2024-11-20 14:03:37.849025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.619 [2024-11-20 14:03:37.849062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.619 [2024-11-20 14:03:37.852452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.619 [2024-11-20 14:03:37.852576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.619 [2024-11-20 14:03:37.852617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.619 [2024-11-20 14:03:37.856488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.619 [2024-11-20 14:03:37.856649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.619 [2024-11-20 14:03:37.856679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:42:40.619 [2024-11-20 14:03:37.860425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.619 [2024-11-20 14:03:37.860538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.619 [2024-11-20 14:03:37.860566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.619 [2024-11-20 14:03:37.863687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.619 [2024-11-20 14:03:37.863844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.619 [2024-11-20 14:03:37.863871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.619 [2024-11-20 14:03:37.867548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.619 [2024-11-20 14:03:37.867635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.619 [2024-11-20 14:03:37.867664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.619 [2024-11-20 14:03:37.871525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.619 [2024-11-20 14:03:37.871677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.619 [2024-11-20 14:03:37.871715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.619 [2024-11-20 14:03:37.875490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.619 [2024-11-20 14:03:37.875560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.619 [2024-11-20 14:03:37.875587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.619 [2024-11-20 14:03:37.879088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.619 [2024-11-20 14:03:37.879586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.619 [2024-11-20 14:03:37.879621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.619 [2024-11-20 14:03:37.882899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.619 [2024-11-20 14:03:37.883057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.619 [2024-11-20 14:03:37.883084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.619 [2024-11-20 14:03:37.886743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.619 [2024-11-20 14:03:37.886985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.619 [2024-11-20 14:03:37.887012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.619 [2024-11-20 14:03:37.890645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.619 [2024-11-20 14:03:37.890760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.619 [2024-11-20 14:03:37.890802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.619 [2024-11-20 14:03:37.894111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.620 [2024-11-20 14:03:37.894485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.620 [2024-11-20 14:03:37.894520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.620 [2024-11-20 14:03:37.898159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.620 [2024-11-20 14:03:37.898233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.620 [2024-11-20 14:03:37.898264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.620 [2024-11-20 14:03:37.902099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.620 [2024-11-20 14:03:37.902168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.620 [2024-11-20 14:03:37.902200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.620 [2024-11-20 14:03:37.906276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.620 [2024-11-20 14:03:37.906372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.620 [2024-11-20 14:03:37.906409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.620 [2024-11-20 14:03:37.910268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.620 [2024-11-20 14:03:37.910534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.620 [2024-11-20 14:03:37.910590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.620 [2024-11-20 14:03:37.914098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.620 [2024-11-20 14:03:37.914520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.620 [2024-11-20 14:03:37.914563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.620 [2024-11-20 14:03:37.918093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.620 [2024-11-20 14:03:37.918391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.620 [2024-11-20 14:03:37.918429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.620 [2024-11-20 14:03:37.921986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.620 [2024-11-20 14:03:37.922063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.620 [2024-11-20 14:03:37.922096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.620 [2024-11-20 14:03:37.925925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.620 [2024-11-20 14:03:37.926016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.620 [2024-11-20 14:03:37.926048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.620 [2024-11-20 14:03:37.929490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.620 [2024-11-20 14:03:37.929881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.620 [2024-11-20 14:03:37.929914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.620 [2024-11-20 14:03:37.933567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.620 [2024-11-20 14:03:37.933792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.620 [2024-11-20 14:03:37.933824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.620 [2024-11-20 14:03:37.937437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.620 [2024-11-20 14:03:37.937496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.620 [2024-11-20 14:03:37.937528] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.881 [2024-11-20 14:03:37.941464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.881 [2024-11-20 14:03:37.941641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.882 [2024-11-20 14:03:37.941687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.882 [2024-11-20 14:03:37.945971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.882 [2024-11-20 14:03:37.946141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.882 [2024-11-20 14:03:37.946176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.882 [2024-11-20 14:03:37.950123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.882 [2024-11-20 14:03:37.950188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.882 [2024-11-20 14:03:37.950218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.882 [2024-11-20 14:03:37.953837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.882 [2024-11-20 14:03:37.954327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.882 [2024-11-20 14:03:37.954363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.882 [2024-11-20 14:03:37.958094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.882 [2024-11-20 14:03:37.958197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.882 [2024-11-20 14:03:37.958229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.882 [2024-11-20 14:03:37.962412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.882 [2024-11-20 14:03:37.962574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.882 [2024-11-20 14:03:37.962624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.882 [2024-11-20 14:03:37.966427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.882 [2024-11-20 14:03:37.966525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.882 [2024-11-20 14:03:37.966557] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.882 [2024-11-20 14:03:37.970183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.882 [2024-11-20 14:03:37.970682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.882 [2024-11-20 14:03:37.970723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.882 [2024-11-20 14:03:37.974340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.882 [2024-11-20 14:03:37.974537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.882 [2024-11-20 14:03:37.974562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.882 [2024-11-20 14:03:37.978418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.882 [2024-11-20 14:03:37.978584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.882 [2024-11-20 14:03:37.978607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.882 [2024-11-20 14:03:37.982600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.882 [2024-11-20 14:03:37.982777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.882 [2024-11-20 14:03:37.982803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.882 [2024-11-20 14:03:37.986656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.882 [2024-11-20 14:03:37.986730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.882 [2024-11-20 14:03:37.986755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.882 [2024-11-20 14:03:37.990240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.882 [2024-11-20 14:03:37.990663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.882 [2024-11-20 14:03:37.990693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.882 [2024-11-20 14:03:37.994013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.882 [2024-11-20 14:03:37.994131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.882 [2024-11-20 
14:03:37.994154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.882 [2024-11-20 14:03:37.998276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.882 [2024-11-20 14:03:37.998346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.882 [2024-11-20 14:03:37.998370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.882 [2024-11-20 14:03:38.002170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.882 [2024-11-20 14:03:38.002338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.882 [2024-11-20 14:03:38.002361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.882 [2024-11-20 14:03:38.005833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.882 [2024-11-20 14:03:38.006168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.882 [2024-11-20 14:03:38.006199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.882 [2024-11-20 14:03:38.009642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.882 [2024-11-20 14:03:38.009787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.882 [2024-11-20 14:03:38.009810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.882 [2024-11-20 14:03:38.013589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.882 [2024-11-20 14:03:38.013692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.882 [2024-11-20 14:03:38.013727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.882 [2024-11-20 14:03:38.017578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.882 [2024-11-20 14:03:38.017742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.882 [2024-11-20 14:03:38.017782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.882 [2024-11-20 14:03:38.021556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.882 [2024-11-20 14:03:38.021665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:42:40.882 [2024-11-20 14:03:38.021685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.882 [2024-11-20 14:03:38.024817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.882 [2024-11-20 14:03:38.025030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.882 [2024-11-20 14:03:38.025052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.882 [2024-11-20 14:03:38.028599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.882 [2024-11-20 14:03:38.028787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.882 [2024-11-20 14:03:38.028808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.882 [2024-11-20 14:03:38.032558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.882 [2024-11-20 14:03:38.032648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.882 [2024-11-20 14:03:38.032669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.882 [2024-11-20 14:03:38.036008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.882 [2024-11-20 14:03:38.036376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.882 [2024-11-20 14:03:38.036406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.882 [2024-11-20 14:03:38.040050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.882 [2024-11-20 14:03:38.040192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.882 [2024-11-20 14:03:38.040214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.882 [2024-11-20 14:03:38.043891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.882 [2024-11-20 14:03:38.044033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.882 [2024-11-20 14:03:38.044055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.882 [2024-11-20 14:03:38.047884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.882 [2024-11-20 14:03:38.048039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23456 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.883 [2024-11-20 14:03:38.048061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.883 [2024-11-20 14:03:38.051867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.883 [2024-11-20 14:03:38.052068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.883 [2024-11-20 14:03:38.052090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.883 [2024-11-20 14:03:38.055403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.883 [2024-11-20 14:03:38.055783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.883 [2024-11-20 14:03:38.055817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.883 [2024-11-20 14:03:38.059275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.883 [2024-11-20 14:03:38.059402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.883 [2024-11-20 14:03:38.059424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.883 [2024-11-20 14:03:38.063315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.883 [2024-11-20 14:03:38.063409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.883 [2024-11-20 14:03:38.063432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.883 [2024-11-20 14:03:38.067355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.883 [2024-11-20 14:03:38.067514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.883 [2024-11-20 14:03:38.067535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.883 [2024-11-20 14:03:38.071309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.883 [2024-11-20 14:03:38.071418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.883 [2024-11-20 14:03:38.071439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.883 [2024-11-20 14:03:38.074629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.883 [2024-11-20 14:03:38.074736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.883 [2024-11-20 14:03:38.074774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.883 [2024-11-20 14:03:38.078839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.883 [2024-11-20 14:03:38.078958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.883 [2024-11-20 14:03:38.078981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.883 [2024-11-20 14:03:38.082824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.883 [2024-11-20 14:03:38.082919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.883 [2024-11-20 14:03:38.082954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.883 [2024-11-20 14:03:38.086844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.883 [2024-11-20 14:03:38.086969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.883 [2024-11-20 14:03:38.086994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.883 [2024-11-20 14:03:38.090700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.883 [2024-11-20 14:03:38.090785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.883 [2024-11-20 14:03:38.090808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.883 [2024-11-20 14:03:38.094156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.883 [2024-11-20 14:03:38.094626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.883 [2024-11-20 14:03:38.094657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.883 [2024-11-20 14:03:38.098115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.883 [2024-11-20 14:03:38.098207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.883 [2024-11-20 14:03:38.098231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.883 [2024-11-20 14:03:38.102391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.883 [2024-11-20 14:03:38.102532] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.883 [2024-11-20 14:03:38.102554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.883 [2024-11-20 14:03:38.106179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.883 [2024-11-20 14:03:38.106240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.883 [2024-11-20 14:03:38.106262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.883 [2024-11-20 14:03:38.109570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.883 [2024-11-20 14:03:38.110028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.883 [2024-11-20 14:03:38.110059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.883 [2024-11-20 14:03:38.113292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.883 [2024-11-20 14:03:38.113429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.883 [2024-11-20 14:03:38.113451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:42:40.883 [2024-11-20 14:03:38.117274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.883 [2024-11-20 14:03:38.117372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.883 [2024-11-20 14:03:38.117395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:42:40.883 [2024-11-20 14:03:38.121215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.883 [2024-11-20 14:03:38.121287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.883 [2024-11-20 14:03:38.121312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:42:40.883 [2024-11-20 14:03:38.124484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.883 [2024-11-20 14:03:38.124570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:42:40.883 [2024-11-20 14:03:38.124593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:40.883 [2024-11-20 14:03:38.128504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15395b0) with pdu=0x200016eff3c8 00:42:40.883 [2024-11-20 14:03:38.128616] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:42:40.883 [2024-11-20 14:03:38.128639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:42:40.883 7846.50 IOPS, 980.81 MiB/s
00:42:40.883 Latency(us)
00:42:40.883 [2024-11-20T14:03:38.206Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:42:40.883 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:42:40.883 nvme0n1 : 2.00 7842.18 980.27 0.00 0.00 2036.02 1287.83 8356.56
00:42:40.883 [2024-11-20T14:03:38.206Z] ===================================================================================================================
00:42:40.883 [2024-11-20T14:03:38.206Z] Total : 7842.18 980.27 0.00 0.00 2036.02 1287.83 8356.56
00:42:40.883 {
00:42:40.883 "results": [
00:42:40.883 {
00:42:40.883 "job": "nvme0n1",
00:42:40.883 "core_mask": "0x2",
00:42:40.883 "workload": "randwrite",
00:42:40.883 "status": "finished",
00:42:40.883 "queue_depth": 16,
00:42:40.883 "io_size": 131072,
00:42:40.883 "runtime": 2.003398,
00:42:40.883 "iops": 7842.176142733496,
00:42:40.883 "mibps": 980.272017841687,
00:42:40.883 "io_failed": 0,
00:42:40.883 "io_timeout": 0,
00:42:40.883 "avg_latency_us": 2036.0166049487202,
00:42:40.883 "min_latency_us": 1287.825327510917,
00:42:40.883 "max_latency_us": 8356.555458515284
00:42:40.883 }
00:42:40.883 ],
00:42:40.883 "core_count": 1
00:42:40.883 }
00:42:40.883 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:42:40.883 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:42:40.883 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:42:40.883 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:42:40.883 | .driver_specific
00:42:40.883 | .nvme_error
00:42:40.884 | .status_code
00:42:40.884 | .command_transient_transport_error'
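(The get_transient_errcount step traced above reduces to roughly the following sketch; the RPC socket, bdev name, and jq path are taken from the trace itself, so treat it as illustrative rather than the literal helper.)

    # Query the app serving /var/tmp/bperf.sock for per-bdev I/O statistics and pull out
    # how many completions carried COMMAND TRANSIENT TRANSPORT ERROR (00/22).
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # Each data digest error reported above surfaces as one such completion, so this subtest
    # appears to pass only when the counter is non-zero (507 in this run, per the check that follows).
    (( errcount > 0 ))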
00:42:41.143 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 507 > 0 ))
00:42:41.143 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80527
00:42:41.143 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80527 ']'
00:42:41.143 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80527
00:42:41.143 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:42:41.143 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:42:41.143 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80527
00:42:41.143 killing process with pid 80527
Received shutdown signal, test time was about 2.000000 seconds
00:42:41.143
00:42:41.143 Latency(us)
00:42:41.143 [2024-11-20T14:03:38.466Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:42:41.143 [2024-11-20T14:03:38.466Z] ===================================================================================================================
00:42:41.143 [2024-11-20T14:03:38.466Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:42:41.143 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:42:41.143 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:42:41.143 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80527'
00:42:41.143 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80527
00:42:41.143 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80527
00:42:41.402 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80320
00:42:41.402 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80320 ']'
00:42:41.402 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80320
00:42:41.402 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:42:41.662 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:42:41.662 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80320
00:42:41.662 killing process with pid 80320
14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:42:41.662 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:42:41.662 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80320'
00:42:41.662 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80320
00:42:41.662 14:03:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80320
00:42:41.924
00:42:41.924 real 0m18.269s
00:42:41.924 user 0m34.490s
00:42:41.924 sys 0m5.122s
00:42:41.924 14:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable
00:42:41.924 ************************************
00:42:41.924 END TEST nvmf_digest_error
00:42:41.924 ************************************
00:42:41.924 14:03:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:42:41.924 14:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:42:41.924 14:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:42:41.924 14:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup
00:42:41.924 14:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync
00:42:41.924 14:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:42:41.924 14:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e
00:42:41.924 14:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20}
00:42:41.924 14:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:42:41.924 rmmod nvme_tcp
00:42:41.924 rmmod nvme_fabrics
00:42:41.924 rmmod nvme_keyring
00:42:42.183 14:03:39
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:42.183 14:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:42:42.183 14:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:42:42.183 14:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 80320 ']' 00:42:42.183 14:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 80320 00:42:42.183 14:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 80320 ']' 00:42:42.183 14:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 80320 00:42:42.183 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80320) - No such process 00:42:42.183 Process with pid 80320 is not found 00:42:42.183 14:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 80320 is not found' 00:42:42.183 14:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:42.183 14:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:42.183 14:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:42.183 14:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:42:42.183 14:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:42:42.183 14:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:42.183 14:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:42:42.183 14:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:42.183 14:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:42:42.183 14:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:42:42.183 14:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:42:42.183 14:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:42:42.183 14:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:42:42.183 14:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:42:42.183 14:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:42:42.183 14:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:42:42.183 14:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:42:42.183 14:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:42:42.183 14:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:42:42.183 14:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:42:42.183 14:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:42:42.183 14:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:42:42.183 14:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:42:42.183 14:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:42:42.183 14:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:42.183 14:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:42.442 14:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:42:42.442 00:42:42.442 real 0m38.096s 00:42:42.442 user 1m9.655s 00:42:42.442 sys 0m11.081s 00:42:42.442 14:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:42.442 14:03:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:42:42.442 ************************************ 00:42:42.442 END TEST nvmf_digest 00:42:42.442 ************************************ 00:42:42.442 14:03:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:42:42.443 14:03:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:42:42.443 14:03:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:42:42.443 14:03:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:42:42.443 14:03:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:42.443 14:03:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:42:42.443 ************************************ 00:42:42.443 START TEST nvmf_host_multipath 00:42:42.443 ************************************ 00:42:42.443 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:42:42.443 * Looking for test storage... 00:42:42.443 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:42:42.443 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:42.443 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:42:42.443 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:42.702 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:42.702 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:42.702 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:42.702 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:42.702 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:42:42.702 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:42:42.702 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:42:42.702 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:42:42.702 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:42:42.702 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:42:42.702 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:42:42.702 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:42.702 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- 
# case "$op" in 00:42:42.702 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:42:42.702 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:42.702 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:42.702 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:42:42.702 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:42:42.702 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:42.702 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:42:42.702 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:42:42.702 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:42:42.702 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:42:42.702 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:42.702 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:42:42.702 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:42:42.702 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:42.702 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:42.702 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:42:42.702 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:42.702 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:42.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:42.703 --rc genhtml_branch_coverage=1 00:42:42.703 --rc genhtml_function_coverage=1 00:42:42.703 --rc genhtml_legend=1 00:42:42.703 --rc geninfo_all_blocks=1 00:42:42.703 --rc geninfo_unexecuted_blocks=1 00:42:42.703 00:42:42.703 ' 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:42.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:42.703 --rc genhtml_branch_coverage=1 00:42:42.703 --rc genhtml_function_coverage=1 00:42:42.703 --rc genhtml_legend=1 00:42:42.703 --rc geninfo_all_blocks=1 00:42:42.703 --rc geninfo_unexecuted_blocks=1 00:42:42.703 00:42:42.703 ' 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:42.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:42.703 --rc genhtml_branch_coverage=1 00:42:42.703 --rc genhtml_function_coverage=1 00:42:42.703 --rc genhtml_legend=1 00:42:42.703 --rc geninfo_all_blocks=1 00:42:42.703 --rc geninfo_unexecuted_blocks=1 00:42:42.703 00:42:42.703 ' 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:42.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:42.703 --rc genhtml_branch_coverage=1 00:42:42.703 --rc genhtml_function_coverage=1 00:42:42.703 --rc genhtml_legend=1 00:42:42.703 --rc geninfo_all_blocks=1 00:42:42.703 --rc geninfo_unexecuted_blocks=1 
00:42:42.703 00:42:42.703 ' 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=105ec898-1662-46bd-85be-b241e399edb9 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:42.703 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:42:42.703 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:42:42.704 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:42:42.704 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:42:42.704 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:42:42.704 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:42:42.704 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:42.704 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:42:42.704 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:42:42.704 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:42:42.704 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:42.704 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:42:42.704 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:42:42.704 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:42:42.704 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:42:42.704 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:42:42.704 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:42:42.704 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:42.704 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:42:42.704 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:42:42.704 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:42:42.704 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:42:42.704 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:42:42.704 Cannot find device "nvmf_init_br" 00:42:42.704 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:42:42.704 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:42:42.704 Cannot find device "nvmf_init_br2" 00:42:42.704 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:42:42.704 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:42:42.704 Cannot find device "nvmf_tgt_br" 00:42:42.704 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:42:42.704 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:42:42.704 Cannot find device "nvmf_tgt_br2" 00:42:42.704 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:42:42.704 14:03:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:42:42.704 Cannot find device "nvmf_init_br" 00:42:42.704 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:42:42.704 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:42:42.704 Cannot find device "nvmf_init_br2" 00:42:42.704 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:42:42.704 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:42:42.964 Cannot find device "nvmf_tgt_br" 00:42:42.964 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:42:42.964 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:42:42.964 Cannot find device "nvmf_tgt_br2" 00:42:42.964 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:42:42.964 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:42:42.964 Cannot find device "nvmf_br" 00:42:42.964 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:42:42.964 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:42:42.964 Cannot find device "nvmf_init_if" 00:42:42.964 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:42:42.964 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:42:42.964 Cannot find device "nvmf_init_if2" 00:42:42.964 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:42:42.964 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:42:42.964 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:42:42.964 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:42:42.964 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:42:42.964 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:42:42.964 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:42:42.964 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:42:42.964 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:42:42.964 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:42:42.964 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:42:42.964 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:42:42.964 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:42:42.964 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:42:42.964 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:42:42.964 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:42:42.964 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:42:42.964 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:42:42.964 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:42:42.964 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:42:42.964 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:42:42.964 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:42:42.964 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:42:42.964 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:42:42.964 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:42:42.964 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:42:42.964 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:42:42.964 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:42:42.964 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:42:42.964 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
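The sequence running here is the nvmf_veth_init helper traced above (nvmf/common.sh@145 onward): it creates the nvmf_tgt_ns_spdk namespace, two initiator-side and two target-side veth pairs, puts 10.0.0.1/10.0.0.2 on the initiator ends in the root namespace and 10.0.0.3/10.0.0.4 on the target ends inside the namespace, and joins the root-namespace peer interfaces to an nvmf_br bridge; the remaining master/iptables/ping steps continue directly below. The sketch that follows is a condensed approximation using only one veth pair per side and omitting the second addresses and the iptables ACCEPT rules; interface names and addresses are copied from the trace, everything else is illustrative rather than the actual helper.

  #!/usr/bin/env bash
  # Condensed approximation of the topology nvmf_veth_init assembles (run as root).
  set -e
  NS=nvmf_tgt_ns_spdk

  ip netns add "$NS"

  # One veth pair per side: the *_if ends carry the IPs, the *_br ends join the bridge.
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, stays in root ns
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side, moved into the ns
  ip link set nvmf_tgt_if netns "$NS"

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec "$NS" ip link set nvmf_tgt_if up
  ip netns exec "$NS" ip link set lo up

  # Bridge the root-namespace peers so 10.0.0.1 can reach 10.0.0.3 inside the namespace.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  ping -c 1 10.0.0.3   # reachability check, mirroring the test (assumes no conflicting firewall rules)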
00:42:42.964 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:42:42.964 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:42:42.964 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:42:43.224 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:42:43.224 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:42:43.224 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:42:43.224 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:42:43.224 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:42:43.224 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:42:43.224 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:42:43.224 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:42:43.224 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.144 ms 00:42:43.224 00:42:43.224 --- 10.0.0.3 ping statistics --- 00:42:43.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:43.224 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:42:43.224 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:42:43.224 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:42:43.224 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.094 ms 00:42:43.224 00:42:43.224 --- 10.0.0.4 ping statistics --- 00:42:43.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:43.224 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:42:43.224 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:42:43.224 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:43.224 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:42:43.224 00:42:43.224 --- 10.0.0.1 ping statistics --- 00:42:43.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:43.224 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:42:43.224 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:42:43.224 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:42:43.224 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:42:43.224 00:42:43.224 --- 10.0.0.2 ping statistics --- 00:42:43.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:43.224 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:42:43.224 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:43.224 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:42:43.224 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:43.224 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:43.224 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:43.224 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:43.224 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:43.224 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:43.224 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:43.224 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:42:43.224 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:43.224 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:43.224 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:42:43.224 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=80863 00:42:43.224 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:42:43.224 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 80863 00:42:43.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:43.224 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 80863 ']' 00:42:43.224 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:43.224 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:43.224 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:43.224 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:43.224 14:03:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:42:43.224 [2024-11-20 14:03:40.443659] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
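The notice above marks the nvmf_tgt application (nvmfpid=80863) starting inside the namespace under 'ip netns exec nvmf_tgt_ns_spdk'; once waitforlisten succeeds, everything else in this test is driven over its JSON-RPC socket with scripts/rpc.py, and each step is verified by confirm_io_on_port, which attaches scripts/bpf/nvmf_path.bt through bpftrace.sh, counts bdevperf I/O per '@path[addr, port]' in trace.txt, and cross-checks the listener ANA states with jq. Below is a minimal sketch of that RPC flow, copied from the calls that appear further down in the trace; rpc.py's default socket /var/tmp/spdk.sock is assumed, and the bpftrace/bdevperf side is left out.

  # RPC sequence mirroring what multipath.sh issues below (values copied from the trace).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421

  # Per-test step: flip one listener's ANA state, then check which port reports it.
  $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.3 -s 4421 -n optimized
  $rpc nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
      | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid'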
00:42:43.224 [2024-11-20 14:03:40.443775] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:43.484 [2024-11-20 14:03:40.578139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:43.484 [2024-11-20 14:03:40.659181] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:43.484 [2024-11-20 14:03:40.659256] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:43.484 [2024-11-20 14:03:40.659264] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:43.484 [2024-11-20 14:03:40.659269] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:43.484 [2024-11-20 14:03:40.659274] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:43.484 [2024-11-20 14:03:40.660629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:43.484 [2024-11-20 14:03:40.660630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:43.484 [2024-11-20 14:03:40.736862] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:42:44.420 14:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:44.420 14:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:42:44.420 14:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:44.420 14:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:44.420 14:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:42:44.420 14:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:44.420 14:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80863 00:42:44.420 14:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:42:44.420 [2024-11-20 14:03:41.646299] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:44.420 14:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:42:44.678 Malloc0 00:42:44.678 14:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:42:44.937 14:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:45.196 14:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:42:45.455 [2024-11-20 14:03:42.640551] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:42:45.455 14:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:42:45.715 [2024-11-20 14:03:42.884295] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:42:45.715 14:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:42:45.715 14:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=80919 00:42:45.715 14:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:42:45.715 14:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 80919 /var/tmp/bdevperf.sock 00:42:45.715 14:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 80919 ']' 00:42:45.715 14:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:42:45.715 14:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:45.715 14:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:42:45.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:42:45.715 14:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:45.715 14:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:42:46.650 14:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:46.650 14:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:42:46.650 14:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:42:46.910 14:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:42:47.172 Nvme0n1 00:42:47.448 14:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:42:47.718 Nvme0n1 00:42:47.718 14:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:42:47.718 14:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:42:48.653 14:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:42:48.653 14:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:42:48.913 14:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:42:49.172 14:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:42:49.172 14:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80863 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:42:49.172 14:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80964 00:42:49.172 14:03:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:42:55.746 14:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:42:55.746 14:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:42:55.746 14:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:42:55.746 14:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:42:55.746 Attaching 4 probes... 00:42:55.746 @path[10.0.0.3, 4421]: 14027 00:42:55.746 @path[10.0.0.3, 4421]: 14447 00:42:55.746 @path[10.0.0.3, 4421]: 14321 00:42:55.746 @path[10.0.0.3, 4421]: 14322 00:42:55.746 @path[10.0.0.3, 4421]: 14450 00:42:55.746 14:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:42:55.746 14:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:42:55.746 14:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:42:55.746 14:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:42:55.746 14:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:42:55.746 14:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:42:55.746 14:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80964 00:42:55.746 14:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:42:55.746 14:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:42:55.746 14:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:42:55.746 14:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:42:56.006 14:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:42:56.006 14:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80863 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:42:56.006 14:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81072 00:42:56.006 14:03:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:43:02.577 14:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:43:02.577 14:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:43:02.577 14:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:43:02.577 14:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:43:02.577 Attaching 4 probes... 00:43:02.577 @path[10.0.0.3, 4420]: 17238 00:43:02.577 @path[10.0.0.3, 4420]: 18287 00:43:02.577 @path[10.0.0.3, 4420]: 18023 00:43:02.577 @path[10.0.0.3, 4420]: 18820 00:43:02.577 @path[10.0.0.3, 4420]: 18718 00:43:02.577 14:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:43:02.577 14:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:43:02.577 14:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:43:02.577 14:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:43:02.577 14:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:43:02.577 14:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:43:02.577 14:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81072 00:43:02.577 14:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:43:02.577 14:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:43:02.577 14:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:43:02.577 14:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:43:02.837 14:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:43:02.837 14:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81190 00:43:02.837 14:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80863 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:43:02.837 14:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:43:09.405 14:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:43:09.405 14:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:43:09.405 14:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:43:09.405 14:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:43:09.405 Attaching 4 probes... 00:43:09.405 @path[10.0.0.3, 4421]: 14066 00:43:09.405 @path[10.0.0.3, 4421]: 19208 00:43:09.405 @path[10.0.0.3, 4421]: 19536 00:43:09.405 @path[10.0.0.3, 4421]: 19459 00:43:09.405 @path[10.0.0.3, 4421]: 18744 00:43:09.405 14:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:43:09.405 14:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:43:09.405 14:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:43:09.405 14:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:43:09.405 14:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:43:09.405 14:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:43:09.405 14:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81190 00:43:09.405 14:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:43:09.405 14:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:43:09.405 14:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:43:09.405 14:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:43:09.405 14:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:43:09.405 14:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81308 00:43:09.405 14:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80863 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:43:09.406 14:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:43:15.982 14:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:43:15.982 14:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:43:15.982 14:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:43:15.982 14:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:43:15.982 Attaching 4 probes... 
00:43:15.982 00:43:15.982 00:43:15.982 00:43:15.982 00:43:15.982 00:43:15.982 14:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:43:15.982 14:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:43:15.982 14:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:43:15.982 14:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:43:15.982 14:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:43:15.982 14:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:43:15.982 14:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81308 00:43:15.982 14:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:43:15.982 14:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:43:15.982 14:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:43:15.982 14:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:43:16.242 14:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:43:16.242 14:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80863 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:43:16.242 14:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81420 00:43:16.242 14:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:43:22.818 14:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:43:22.818 14:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:43:22.818 14:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:43:22.818 14:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:43:22.818 Attaching 4 probes... 
00:43:22.818 @path[10.0.0.3, 4421]: 18071 00:43:22.818 @path[10.0.0.3, 4421]: 19632 00:43:22.818 @path[10.0.0.3, 4421]: 19872 00:43:22.818 @path[10.0.0.3, 4421]: 20910 00:43:22.818 @path[10.0.0.3, 4421]: 22790 00:43:22.818 14:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:43:22.818 14:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:43:22.818 14:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:43:22.818 14:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:43:22.818 14:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:43:22.818 14:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:43:22.818 14:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81420 00:43:22.818 14:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:43:22.818 14:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:43:22.818 14:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:43:23.756 14:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:43:23.756 14:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81538 00:43:23.756 14:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80863 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:43:23.756 14:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:43:30.334 14:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:43:30.334 14:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:43:30.334 14:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:43:30.334 14:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:43:30.334 Attaching 4 probes... 
00:43:30.334 @path[10.0.0.3, 4420]: 22099 00:43:30.334 @path[10.0.0.3, 4420]: 22361 00:43:30.334 @path[10.0.0.3, 4420]: 22109 00:43:30.334 @path[10.0.0.3, 4420]: 22088 00:43:30.334 @path[10.0.0.3, 4420]: 21597 00:43:30.334 14:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:43:30.334 14:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:43:30.334 14:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:43:30.334 14:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:43:30.334 14:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:43:30.334 14:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:43:30.334 14:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81538 00:43:30.334 14:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:43:30.334 14:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:43:30.334 [2024-11-20 14:04:27.342940] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:43:30.334 14:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:43:30.334 14:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:43:36.905 14:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:43:36.905 14:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81718 00:43:36.905 14:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80863 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:43:36.905 14:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:43:43.514 14:04:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:43:43.514 14:04:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:43:43.514 14:04:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:43:43.514 14:04:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:43:43.514 Attaching 4 probes... 
00:43:43.514 @path[10.0.0.3, 4421]: 21172 00:43:43.514 @path[10.0.0.3, 4421]: 21280 00:43:43.514 @path[10.0.0.3, 4421]: 21422 00:43:43.514 @path[10.0.0.3, 4421]: 20960 00:43:43.514 @path[10.0.0.3, 4421]: 21248 00:43:43.514 14:04:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:43:43.514 14:04:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:43:43.514 14:04:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:43:43.514 14:04:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:43:43.514 14:04:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:43:43.514 14:04:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:43:43.514 14:04:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81718 00:43:43.514 14:04:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:43:43.514 14:04:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 80919 00:43:43.514 14:04:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 80919 ']' 00:43:43.514 14:04:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 80919 00:43:43.514 14:04:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:43:43.514 14:04:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:43.514 14:04:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80919 00:43:43.514 14:04:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:43:43.514 killing process with pid 80919 00:43:43.514 14:04:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:43:43.514 14:04:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80919' 00:43:43.514 14:04:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 80919 00:43:43.514 14:04:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 80919 00:43:43.514 { 00:43:43.514 "results": [ 00:43:43.514 { 00:43:43.514 "job": "Nvme0n1", 00:43:43.514 "core_mask": "0x4", 00:43:43.514 "workload": "verify", 00:43:43.514 "status": "terminated", 00:43:43.514 "verify_range": { 00:43:43.514 "start": 0, 00:43:43.514 "length": 16384 00:43:43.514 }, 00:43:43.514 "queue_depth": 128, 00:43:43.514 "io_size": 4096, 00:43:43.514 "runtime": 55.085557, 00:43:43.514 "iops": 8304.245702734748, 00:43:43.514 "mibps": 32.43845977630761, 00:43:43.514 "io_failed": 0, 00:43:43.514 "io_timeout": 0, 00:43:43.514 "avg_latency_us": 15389.704212123188, 00:43:43.514 "min_latency_us": 225.3694323144105, 00:43:43.514 "max_latency_us": 7033243.388646288 00:43:43.514 } 00:43:43.514 ], 00:43:43.514 "core_count": 1 00:43:43.514 } 00:43:43.515 14:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 80919 00:43:43.515 14:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:43:43.515 [2024-11-20 14:03:42.943947] Starting SPDK v25.01-pre git sha1 f9d18d578 / 
DPDK 24.03.0 initialization... 00:43:43.515 [2024-11-20 14:03:42.944055] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80919 ] 00:43:43.515 [2024-11-20 14:03:43.095547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:43.515 [2024-11-20 14:03:43.177747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:43:43.515 [2024-11-20 14:03:43.256652] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:43:43.515 Running I/O for 90 seconds... 00:43:43.515 9417.00 IOPS, 36.79 MiB/s [2024-11-20T14:04:40.838Z] 8548.00 IOPS, 33.39 MiB/s [2024-11-20T14:04:40.838Z] 8097.33 IOPS, 31.63 MiB/s [2024-11-20T14:04:40.838Z] 7876.00 IOPS, 30.77 MiB/s [2024-11-20T14:04:40.838Z] 7729.60 IOPS, 30.19 MiB/s [2024-11-20T14:04:40.838Z] 7632.67 IOPS, 29.82 MiB/s [2024-11-20T14:04:40.838Z] 7572.57 IOPS, 29.58 MiB/s [2024-11-20T14:04:40.838Z] 7540.00 IOPS, 29.45 MiB/s [2024-11-20T14:04:40.838Z] [2024-11-20 14:03:53.085443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:99456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.515 [2024-11-20 14:03:53.085536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:43:43.515 [2024-11-20 14:03:53.085596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:99464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.515 [2024-11-20 14:03:53.085609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:43:43.515 [2024-11-20 14:03:53.085627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:99472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.515 [2024-11-20 14:03:53.085638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:43:43.515 [2024-11-20 14:03:53.085656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:99480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.515 [2024-11-20 14:03:53.085667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:43:43.515 [2024-11-20 14:03:53.085683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:99488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.515 [2024-11-20 14:03:53.085694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:43:43.515 [2024-11-20 14:03:53.085721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:99496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.515 [2024-11-20 14:03:53.085732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:43:43.515 [2024-11-20 14:03:53.085750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:99504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.515 [2024-11-20 14:03:53.085760] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:43:43.515 [2024-11-20 14:03:53.085776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:99512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.515 [2024-11-20 14:03:53.085786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:43:43.515 [2024-11-20 14:03:53.085802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.515 [2024-11-20 14:03:53.085813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:43:43.515 [2024-11-20 14:03:53.085829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:99528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.515 [2024-11-20 14:03:53.085874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:43:43.515 [2024-11-20 14:03:53.085890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.515 [2024-11-20 14:03:53.085901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:43:43.515 [2024-11-20 14:03:53.085918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.515 [2024-11-20 14:03:53.085928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:43:43.515 [2024-11-20 14:03:53.085945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:99552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.515 [2024-11-20 14:03:53.085956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:43:43.515 [2024-11-20 14:03:53.085973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:99560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.515 [2024-11-20 14:03:53.085983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:43:43.515 [2024-11-20 14:03:53.085999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.515 [2024-11-20 14:03:53.086009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:43:43.515 [2024-11-20 14:03:53.086026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:99576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.515 [2024-11-20 14:03:53.086036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:43:43.515 [2024-11-20 14:03:53.086053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:98944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
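The host/multipath.sh@69 trace near the top of this excerpt reduces the "@path[10.0.0.3, 4421]: ..." counters to the active port number. The traced cut, sed and awk calls are pieces of one command substitution; the exact pipeline order is not visible in the trace, but one composition that yields the same result, fed with a sample line copied from the counters above, is:

  printf '@path[10.0.0.3, 4421]: 21172\n' \
    | cut -d ']' -f1 \
    | sed -n 1p \
    | awk '$1=="@path[10.0.0.3," {print $2}'
  # prints 4421, which multipath.sh@70/@71 then compares against the expected port 4421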
00:43:43.515 [2024-11-20 14:03:53.086064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.515 [2024-11-20 14:03:53.086081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:98952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.515 [2024-11-20 14:03:53.086092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:43.515 [2024-11-20 14:03:53.086109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:98960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.515 [2024-11-20 14:03:53.086119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:43:43.515 [2024-11-20 14:03:53.086136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:98968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.515 [2024-11-20 14:03:53.086146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:43:43.515 [2024-11-20 14:03:53.086163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.515 [2024-11-20 14:03:53.086173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:43:43.515 [2024-11-20 14:03:53.086189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:98984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.515 [2024-11-20 14:03:53.086206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:43:43.515 [2024-11-20 14:03:53.086223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:98992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.515 [2024-11-20 14:03:53.086234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:43:43.515 [2024-11-20 14:03:53.086250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:99000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.515 [2024-11-20 14:03:53.086260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:43:43.515 [2024-11-20 14:03:53.086277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:99008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.515 [2024-11-20 14:03:53.086287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:43:43.515 [2024-11-20 14:03:53.086303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.515 [2024-11-20 14:03:53.086313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:43:43.515 [2024-11-20 14:03:53.086329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 
nsid:1 lba:99024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.516 [2024-11-20 14:03:53.086339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:43:43.516 [2024-11-20 14:03:53.086356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:99032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.516 [2024-11-20 14:03:53.086366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:43:43.516 [2024-11-20 14:03:53.086382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:99040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.516 [2024-11-20 14:03:53.086395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:43:43.516 [2024-11-20 14:03:53.086412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:99048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.516 [2024-11-20 14:03:53.086422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:43:43.516 [2024-11-20 14:03:53.086439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:99056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.516 [2024-11-20 14:03:53.086449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:43:43.516 [2024-11-20 14:03:53.086466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:99064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.516 [2024-11-20 14:03:53.086476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:43:43.516 [2024-11-20 14:03:53.086493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:99072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.516 [2024-11-20 14:03:53.086503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:43:43.516 [2024-11-20 14:03:53.086529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:99080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.516 [2024-11-20 14:03:53.086540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:43:43.516 [2024-11-20 14:03:53.086562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:99088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.516 [2024-11-20 14:03:53.086574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:43:43.516 [2024-11-20 14:03:53.086591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:99096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.516 [2024-11-20 14:03:53.086601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:43:43.516 [2024-11-20 14:03:53.086618] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:99104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.516 [2024-11-20 14:03:53.086628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:43:43.516 [2024-11-20 14:03:53.086645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:99112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.516 [2024-11-20 14:03:53.086656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:43:43.516 [2024-11-20 14:03:53.086672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:99120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.516 [2024-11-20 14:03:53.086682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:43:43.516 [2024-11-20 14:03:53.086699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:99128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.516 [2024-11-20 14:03:53.086717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:43:43.516 [2024-11-20 14:03:53.086738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:99584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.516 [2024-11-20 14:03:53.086749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:43:43.516 [2024-11-20 14:03:53.086765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:99592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.516 [2024-11-20 14:03:53.086777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:43:43.516 [2024-11-20 14:03:53.086794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:99600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.516 [2024-11-20 14:03:53.086804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:43:43.516 [2024-11-20 14:03:53.086820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:99608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.516 [2024-11-20 14:03:53.086831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:43:43.516 [2024-11-20 14:03:53.086848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:99616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.516 [2024-11-20 14:03:53.086860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:43:43.516 [2024-11-20 14:03:53.086876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:99624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.516 [2024-11-20 14:03:53.086887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001d p:0 m:0 dnr:0 
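The results block printed above when bdevperf (pid 80919) was killed reports iops, mibps, io_size and runtime for the Nvme0n1 verify job. The throughput fields are mutually consistent under the assumption (not stated in the log) that mibps = iops * io_size / 2^20; a quick check with the numbers copied from that block:

  awk 'BEGIN { printf "%.6f MiB/s\n", 8304.245702734748 * 4096 / 1048576 }'
  # prints 32.438460 MiB/s, in line with the reported "mibps": 32.43845977630761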
00:43:43.516 [2024-11-20 14:03:53.086909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:99632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.516 [2024-11-20 14:03:53.086920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:43:43.516 [2024-11-20 14:03:53.086937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:99640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.516 [2024-11-20 14:03:53.086955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:43:43.516 [2024-11-20 14:03:53.086973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.516 [2024-11-20 14:03:53.086984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:43:43.516 [2024-11-20 14:03:53.087002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:99656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.516 [2024-11-20 14:03:53.087013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:43.516 [2024-11-20 14:03:53.087030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:99664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.516 [2024-11-20 14:03:53.087040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:43:43.516 [2024-11-20 14:03:53.087057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:99672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.516 [2024-11-20 14:03:53.087067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:43:43.516 [2024-11-20 14:03:53.087083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:99680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.516 [2024-11-20 14:03:53.087094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:43:43.516 [2024-11-20 14:03:53.087111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.516 [2024-11-20 14:03:53.087121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:43:43.516 [2024-11-20 14:03:53.087138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:99136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.516 [2024-11-20 14:03:53.087148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:43:43.516 [2024-11-20 14:03:53.087165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:99144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.516 [2024-11-20 14:03:53.087175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:43:43.516 [2024-11-20 14:03:53.087192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:99152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.516 [2024-11-20 14:03:53.087202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:43:43.516 [2024-11-20 14:03:53.087219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.516 [2024-11-20 14:03:53.087229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:43:43.516 [2024-11-20 14:03:53.087246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:99168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.516 [2024-11-20 14:03:53.087261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:43:43.516 [2024-11-20 14:03:53.087278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:99176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.516 [2024-11-20 14:03:53.087288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:43:43.517 [2024-11-20 14:03:53.087305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:99184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.517 [2024-11-20 14:03:53.087317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:43:43.517 [2024-11-20 14:03:53.087333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:99192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.517 [2024-11-20 14:03:53.087344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:43:43.517 [2024-11-20 14:03:53.087361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:99696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.517 [2024-11-20 14:03:53.087372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:43:43.517 [2024-11-20 14:03:53.087389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:99704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.517 [2024-11-20 14:03:53.087400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:43:43.517 [2024-11-20 14:03:53.087420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:99712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.517 [2024-11-20 14:03:53.087431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:43:43.517 [2024-11-20 14:03:53.087448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.517 [2024-11-20 14:03:53.087460] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:43:43.517 [2024-11-20 14:03:53.087476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.517 [2024-11-20 14:03:53.087486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:43:43.517 [2024-11-20 14:03:53.087503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:99736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.517 [2024-11-20 14:03:53.087514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:43:43.517 [2024-11-20 14:03:53.087530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:99744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.517 [2024-11-20 14:03:53.087541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:43:43.517 [2024-11-20 14:03:53.087558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.517 [2024-11-20 14:03:53.087570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:43:43.517 [2024-11-20 14:03:53.087586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:99760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.517 [2024-11-20 14:03:53.087602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:43:43.517 [2024-11-20 14:03:53.087619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:99768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.517 [2024-11-20 14:03:53.087630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:43:43.517 [2024-11-20 14:03:53.088440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:99776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.517 [2024-11-20 14:03:53.088470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:43:43.517 [2024-11-20 14:03:53.088493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:99784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.517 [2024-11-20 14:03:53.088504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:43:43.517 [2024-11-20 14:03:53.088521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:99792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.517 [2024-11-20 14:03:53.088533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:43:43.517 [2024-11-20 14:03:53.088550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:99800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:43:43.517 [2024-11-20 14:03:53.088561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:43:43.517 [2024-11-20 14:03:53.088579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:99808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.517 [2024-11-20 14:03:53.088591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:43:43.517 [2024-11-20 14:03:53.088608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:99816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.517 [2024-11-20 14:03:53.088618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:43:43.517 [2024-11-20 14:03:53.088635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:99824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.517 [2024-11-20 14:03:53.088646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:43:43.517 [2024-11-20 14:03:53.088663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:99832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.517 [2024-11-20 14:03:53.088677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:43:43.517 [2024-11-20 14:03:53.088694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:99200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.517 [2024-11-20 14:03:53.088704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:43:43.517 [2024-11-20 14:03:53.088734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:99208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.517 [2024-11-20 14:03:53.088746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:43.517 [2024-11-20 14:03:53.088763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:99216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.517 [2024-11-20 14:03:53.088774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:43:43.517 [2024-11-20 14:03:53.088805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:99224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.517 [2024-11-20 14:03:53.088816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:43:43.517 [2024-11-20 14:03:53.088833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:99232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.517 [2024-11-20 14:03:53.088844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:43:43.517 [2024-11-20 14:03:53.088860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 
lba:99240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.517 [2024-11-20 14:03:53.088871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:43:43.517 [2024-11-20 14:03:53.088888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:99248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.517 [2024-11-20 14:03:53.088898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:43:43.517 [2024-11-20 14:03:53.088915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:99256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.517 [2024-11-20 14:03:53.088926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:43:43.517 [2024-11-20 14:03:53.090498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:99840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.517 [2024-11-20 14:03:53.090528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:43:43.517 [2024-11-20 14:03:53.090551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:99848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.517 [2024-11-20 14:03:53.090563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:43:43.517 [2024-11-20 14:03:53.090580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:99856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.517 [2024-11-20 14:03:53.090591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:43:43.517 [2024-11-20 14:03:53.090608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:99864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.517 [2024-11-20 14:03:53.090619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:43:43.517 [2024-11-20 14:03:53.090637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:99872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.518 [2024-11-20 14:03:53.090649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:43:43.518 [2024-11-20 14:03:53.090665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:99880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.518 [2024-11-20 14:03:53.090676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:43:43.518 [2024-11-20 14:03:53.090692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:99888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.518 [2024-11-20 14:03:53.090703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:43:43.518 [2024-11-20 14:03:53.090738] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:99896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.518 [2024-11-20 14:03:53.090755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:43:43.518 [2024-11-20 14:03:53.090773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:99264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.518 [2024-11-20 14:03:53.090783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:43:43.518 [2024-11-20 14:03:53.090800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:99272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.518 [2024-11-20 14:03:53.090811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:43:43.518 [2024-11-20 14:03:53.090828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:99280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.518 [2024-11-20 14:03:53.090838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:43.518 [2024-11-20 14:03:53.090855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:99288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.518 [2024-11-20 14:03:53.090866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:43:43.518 [2024-11-20 14:03:53.090882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:99296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.518 [2024-11-20 14:03:53.090893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:43:43.518 [2024-11-20 14:03:53.090910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.518 [2024-11-20 14:03:53.090920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:43:43.518 [2024-11-20 14:03:53.090937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.518 [2024-11-20 14:03:53.090958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:43:43.518 [2024-11-20 14:03:53.090975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:99320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.518 [2024-11-20 14:03:53.090986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:43:43.518 [2024-11-20 14:03:53.091003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:99328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.518 [2024-11-20 14:03:53.091014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 
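Nearly every completion in this try.txt dump carries the path-related status ASYMMETRIC ACCESS INACCESSIBLE, printed as (03/02), i.e. status code type 0x3 (path related) with status code 0x02 (ANA inaccessible). That appears to be the error the multipath test provokes while it toggles the ANA state of its listeners, and it is consistent with io_failed staying 0 in the summary above, the host retrying the I/O on the other path instead. A throw-away tally such as the following (not part of the test suite, just a way to digest the dump) groups the completions by status:

  grep -oE 'print_completion: \*NOTICE\*: [A-Z ]+\([0-9a-f]{2}/[0-9a-f]{2}\)' \
    /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt | sort | uniq -c | sort -rn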
00:43:43.518 [2024-11-20 14:03:53.091030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:99336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.518 [2024-11-20 14:03:53.091041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:43:43.518 [2024-11-20 14:03:53.091057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:99344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.518 [2024-11-20 14:03:53.091067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:43:43.518 [2024-11-20 14:03:53.091085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:99352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.518 [2024-11-20 14:03:53.091101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:43:43.518 [2024-11-20 14:03:53.091118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:99360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.518 [2024-11-20 14:03:53.091129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:43:43.518 [2024-11-20 14:03:53.091145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:99368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.518 [2024-11-20 14:03:53.091155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:43:43.518 [2024-11-20 14:03:53.091172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.518 [2024-11-20 14:03:53.091183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:43:43.518 [2024-11-20 14:03:53.091200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:99384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.518 [2024-11-20 14:03:53.091212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:43:43.518 [2024-11-20 14:03:53.091229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:99392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.518 [2024-11-20 14:03:53.091240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:43:43.518 [2024-11-20 14:03:53.091256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:99400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.518 [2024-11-20 14:03:53.091267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:43.518 [2024-11-20 14:03:53.091284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:99408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.518 [2024-11-20 14:03:53.091295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:43:43.518 [2024-11-20 14:03:53.091312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:99416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.518 [2024-11-20 14:03:53.091322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:43:43.518 [2024-11-20 14:03:53.091338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:99424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.518 [2024-11-20 14:03:53.091349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:43:43.518 [2024-11-20 14:03:53.091366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:99432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.518 [2024-11-20 14:03:53.091376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:43:43.518 [2024-11-20 14:03:53.091393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:99440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.518 [2024-11-20 14:03:53.091403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:43:43.518 [2024-11-20 14:03:53.091420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:99448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.518 [2024-11-20 14:03:53.091439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:43:43.518 [2024-11-20 14:03:53.091456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:99904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.518 [2024-11-20 14:03:53.091467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:43:43.518 [2024-11-20 14:03:53.091484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.518 [2024-11-20 14:03:53.091494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:43:43.518 [2024-11-20 14:03:53.091511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:99920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.518 [2024-11-20 14:03:53.091521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:43:43.518 [2024-11-20 14:03:53.091538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:99928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.518 [2024-11-20 14:03:53.091553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:43:43.518 [2024-11-20 14:03:53.091571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.518 [2024-11-20 14:03:53.091581] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:43:43.518 [2024-11-20 14:03:53.091599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.518 [2024-11-20 14:03:53.091610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:43:43.518 [2024-11-20 14:03:53.091627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:99952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.518 [2024-11-20 14:03:53.091638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:43:43.519 [2024-11-20 14:03:53.091672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:99960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.519 [2024-11-20 14:03:53.091685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:43:43.519 7604.44 IOPS, 29.70 MiB/s [2024-11-20T14:04:40.842Z] 7736.00 IOPS, 30.22 MiB/s [2024-11-20T14:04:40.842Z] 7837.82 IOPS, 30.62 MiB/s [2024-11-20T14:04:40.842Z] 7966.00 IOPS, 31.12 MiB/s [2024-11-20T14:04:40.842Z] 8076.92 IOPS, 31.55 MiB/s [2024-11-20T14:04:40.842Z] 8140.57 IOPS, 31.80 MiB/s [2024-11-20T14:04:40.842Z] [2024-11-20 14:03:59.669866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:42432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.519 [2024-11-20 14:03:59.669948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:43:43.519 [2024-11-20 14:03:59.670003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:42440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.519 [2024-11-20 14:03:59.670017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:43:43.519 [2024-11-20 14:03:59.670036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:42448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.519 [2024-11-20 14:03:59.670047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:43:43.519 [2024-11-20 14:03:59.670093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:42456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.519 [2024-11-20 14:03:59.670105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:43:43.519 [2024-11-20 14:03:59.670123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:42464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.519 [2024-11-20 14:03:59.670134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:43:43.519 [2024-11-20 14:03:59.670151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:42472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.519 [2024-11-20 14:03:59.670163] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:43.519 [2024-11-20 14:03:59.670181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:42480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.519 [2024-11-20 14:03:59.670192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:43:43.519 [2024-11-20 14:03:59.670209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:42488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.519 [2024-11-20 14:03:59.670220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:43:43.519 [2024-11-20 14:03:59.671088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:42496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.519 [2024-11-20 14:03:59.671118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:43:43.519 [2024-11-20 14:03:59.671140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:42504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.519 [2024-11-20 14:03:59.671154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:43:43.519 [2024-11-20 14:03:59.671173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:42512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.519 [2024-11-20 14:03:59.671187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:43:43.519 [2024-11-20 14:03:59.671206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:42520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.519 [2024-11-20 14:03:59.671219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:43:43.519 [2024-11-20 14:03:59.671238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:42528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.519 [2024-11-20 14:03:59.671249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:43:43.519 [2024-11-20 14:03:59.671269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:42536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.519 [2024-11-20 14:03:59.671281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:43:43.519 [2024-11-20 14:03:59.671300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:42544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.519 [2024-11-20 14:03:59.671312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:43:43.519 [2024-11-20 14:03:59.671332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:42552 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:43:43.519 [2024-11-20 14:03:59.671360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:43:43.519 [2024-11-20 14:03:59.671380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:41856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.519 [2024-11-20 14:03:59.671392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:43:43.519 [2024-11-20 14:03:59.671415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:41864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.519 [2024-11-20 14:03:59.671428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:43:43.519 [2024-11-20 14:03:59.671448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:41872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.519 [2024-11-20 14:03:59.671460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:43:43.519 [2024-11-20 14:03:59.671480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:41880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.519 [2024-11-20 14:03:59.671493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:43:43.519 [2024-11-20 14:03:59.671513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:41888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.519 [2024-11-20 14:03:59.671524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:43:43.519 [2024-11-20 14:03:59.671542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:41896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.519 [2024-11-20 14:03:59.671554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:43:43.519 [2024-11-20 14:03:59.671572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.519 [2024-11-20 14:03:59.671583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:43.519 [2024-11-20 14:03:59.671601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:41912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.519 [2024-11-20 14:03:59.671613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:43:43.519 [2024-11-20 14:03:59.671632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:41920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.519 [2024-11-20 14:03:59.671644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:43:43.519 [2024-11-20 14:03:59.671662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:93 nsid:1 lba:41928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.519 [2024-11-20 14:03:59.671674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:43:43.520 [2024-11-20 14:03:59.671693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:41936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.520 [2024-11-20 14:03:59.671704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:43:43.520 [2024-11-20 14:03:59.671733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:41944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.520 [2024-11-20 14:03:59.671750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:43:43.520 [2024-11-20 14:03:59.671769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:41952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.520 [2024-11-20 14:03:59.671781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:43:43.520 [2024-11-20 14:03:59.671801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:41960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.520 [2024-11-20 14:03:59.671812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:43:43.520 [2024-11-20 14:03:59.671831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:41968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.520 [2024-11-20 14:03:59.671842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:43:43.520 [2024-11-20 14:03:59.671861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:41976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.520 [2024-11-20 14:03:59.671873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:43:43.520 [2024-11-20 14:03:59.671891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.520 [2024-11-20 14:03:59.671903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:43:43.520 [2024-11-20 14:03:59.671923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:41992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.520 [2024-11-20 14:03:59.671936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:43:43.520 [2024-11-20 14:03:59.671954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:42000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.520 [2024-11-20 14:03:59.671966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:43:43.520 [2024-11-20 14:03:59.671984] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:42008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.520 [2024-11-20 14:03:59.671995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:43:43.520 [2024-11-20 14:03:59.672014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:42016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.520 [2024-11-20 14:03:59.672025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:43:43.520 [2024-11-20 14:03:59.672044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:42024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.520 [2024-11-20 14:03:59.672056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:43.520 [2024-11-20 14:03:59.672073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:42032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.520 [2024-11-20 14:03:59.672085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:43:43.520 [2024-11-20 14:03:59.672104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:42040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.520 [2024-11-20 14:03:59.672115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:43:43.520 [2024-11-20 14:03:59.672229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:42560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.520 [2024-11-20 14:03:59.672245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:43:43.520 [2024-11-20 14:03:59.672267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:42568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.520 [2024-11-20 14:03:59.672278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:43:43.520 [2024-11-20 14:03:59.672299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:42576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.520 [2024-11-20 14:03:59.672310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:43:43.520 [2024-11-20 14:03:59.672331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:42584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.520 [2024-11-20 14:03:59.672343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:43:43.520 [2024-11-20 14:03:59.672364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.520 [2024-11-20 14:03:59.672375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0068 
p:0 m:0 dnr:0 00:43:43.520 [2024-11-20 14:03:59.672397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:42600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.520 [2024-11-20 14:03:59.672408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:43:43.520 [2024-11-20 14:03:59.672429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:42608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.520 [2024-11-20 14:03:59.672441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:43:43.520 [2024-11-20 14:03:59.672461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:42616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.520 [2024-11-20 14:03:59.672473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:43:43.520 [2024-11-20 14:03:59.672492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:42048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.520 [2024-11-20 14:03:59.672504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:43:43.520 [2024-11-20 14:03:59.672524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:42056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.520 [2024-11-20 14:03:59.672536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:43:43.520 [2024-11-20 14:03:59.672557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:42064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.520 [2024-11-20 14:03:59.672569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:43:43.520 [2024-11-20 14:03:59.672589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:42072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.520 [2024-11-20 14:03:59.672601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:43:43.520 [2024-11-20 14:03:59.672626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:42080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.520 [2024-11-20 14:03:59.672639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:43:43.520 [2024-11-20 14:03:59.672661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:42088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.520 [2024-11-20 14:03:59.672673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:43:43.520 [2024-11-20 14:03:59.672694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:42096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.520 [2024-11-20 14:03:59.672740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:43:43.520 [2024-11-20 14:03:59.672763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:42104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.520 [2024-11-20 14:03:59.672776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:43:43.520 [2024-11-20 14:03:59.672798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:42112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.520 [2024-11-20 14:03:59.672810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:43:43.520 [2024-11-20 14:03:59.672830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:42120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.520 [2024-11-20 14:03:59.672842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:43:43.520 [2024-11-20 14:03:59.672862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:42128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.520 [2024-11-20 14:03:59.672874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:43:43.520 [2024-11-20 14:03:59.672896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:42136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.521 [2024-11-20 14:03:59.672908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:43:43.521 [2024-11-20 14:03:59.672928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:42144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.521 [2024-11-20 14:03:59.672940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:43:43.521 [2024-11-20 14:03:59.672961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:42152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.521 [2024-11-20 14:03:59.672974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:43:43.521 [2024-11-20 14:03:59.672995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:42160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.521 [2024-11-20 14:03:59.673007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:43:43.521 [2024-11-20 14:03:59.673028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:42168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.521 [2024-11-20 14:03:59.673039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:43:43.521 [2024-11-20 14:03:59.673269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:42624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.521 [2024-11-20 14:03:59.673312] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:43:43.521 [2024-11-20 14:03:59.673336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:42632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.521 [2024-11-20 14:03:59.673347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:43:43.521 [2024-11-20 14:03:59.673370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:42640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.521 [2024-11-20 14:03:59.673382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:43:43.521 [2024-11-20 14:03:59.673404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:42648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.521 [2024-11-20 14:03:59.673417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:43:43.521 [2024-11-20 14:03:59.673438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:42656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.521 [2024-11-20 14:03:59.673450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.521 [2024-11-20 14:03:59.673472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:42664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.521 [2024-11-20 14:03:59.673485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:43.521 [2024-11-20 14:03:59.673507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:42672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.521 [2024-11-20 14:03:59.673520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:43:43.521 [2024-11-20 14:03:59.673541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:42680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.521 [2024-11-20 14:03:59.673552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:43:43.521 [2024-11-20 14:03:59.673575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:42176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.521 [2024-11-20 14:03:59.673585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:43:43.521 [2024-11-20 14:03:59.673608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:42184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.521 [2024-11-20 14:03:59.673619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:43:43.521 [2024-11-20 14:03:59.673642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:42192 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:43:43.521 [2024-11-20 14:03:59.673653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:43:43.521 [2024-11-20 14:03:59.673675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:42200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.521 [2024-11-20 14:03:59.673686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:43:43.521 [2024-11-20 14:03:59.673718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:42208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.521 [2024-11-20 14:03:59.673740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:43:43.521 [2024-11-20 14:03:59.673763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:42216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.521 [2024-11-20 14:03:59.673774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:43:43.521 [2024-11-20 14:03:59.673797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:42224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.521 [2024-11-20 14:03:59.673808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:43:43.521 [2024-11-20 14:03:59.673830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:42232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.521 [2024-11-20 14:03:59.673842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:43:43.521 [2024-11-20 14:03:59.673865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:42240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.521 [2024-11-20 14:03:59.673876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:43:43.521 [2024-11-20 14:03:59.673898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:42248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.521 [2024-11-20 14:03:59.673910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:43:43.521 [2024-11-20 14:03:59.673932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:42256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.521 [2024-11-20 14:03:59.673944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:43:43.521 [2024-11-20 14:03:59.673965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:42264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.521 [2024-11-20 14:03:59.673977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:43:43.521 [2024-11-20 14:03:59.673999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:30 nsid:1 lba:42272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.521 [2024-11-20 14:03:59.674010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:43:43.521 [2024-11-20 14:03:59.674032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:42280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.521 [2024-11-20 14:03:59.674044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:43:43.521 [2024-11-20 14:03:59.674066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:42288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.521 [2024-11-20 14:03:59.674078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:43:43.521 [2024-11-20 14:03:59.674099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:42296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.521 [2024-11-20 14:03:59.674111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:43:43.521 [2024-11-20 14:03:59.674387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:42688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.521 [2024-11-20 14:03:59.674403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:43:43.521 [2024-11-20 14:03:59.674435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:42696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.521 [2024-11-20 14:03:59.674448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:43:43.521 [2024-11-20 14:03:59.674471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:42704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.521 [2024-11-20 14:03:59.674483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:43:43.521 [2024-11-20 14:03:59.674507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:42712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.522 [2024-11-20 14:03:59.674519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:43:43.522 [2024-11-20 14:03:59.674543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:42720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.522 [2024-11-20 14:03:59.674554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:43:43.522 [2024-11-20 14:03:59.674578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:42728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.522 [2024-11-20 14:03:59.674591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:43:43.522 [2024-11-20 14:03:59.674615] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:42736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.522 [2024-11-20 14:03:59.674628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:43:43.522 [2024-11-20 14:03:59.674651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:42744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.522 [2024-11-20 14:03:59.674663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:43:43.522 [2024-11-20 14:03:59.674686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:42752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.522 [2024-11-20 14:03:59.674699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:43:43.522 [2024-11-20 14:03:59.674737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:42760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.522 [2024-11-20 14:03:59.674749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:43:43.522 [2024-11-20 14:03:59.674772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:42768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.522 [2024-11-20 14:03:59.674784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:43:43.522 [2024-11-20 14:03:59.674808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:42776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.522 [2024-11-20 14:03:59.674820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:43:43.522 [2024-11-20 14:03:59.674842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:42784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.522 [2024-11-20 14:03:59.674854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:43:43.522 [2024-11-20 14:03:59.674884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:42792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.522 [2024-11-20 14:03:59.674895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:43.522 [2024-11-20 14:03:59.674919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:42800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.522 [2024-11-20 14:03:59.674930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:43:43.522 [2024-11-20 14:03:59.674962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:42808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.522 [2024-11-20 14:03:59.674975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0023 p:0 m:0 
dnr:0 00:43:43.522 [2024-11-20 14:03:59.674997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:42304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.522 [2024-11-20 14:03:59.675010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:43:43.522 [2024-11-20 14:03:59.675033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:42312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.522 [2024-11-20 14:03:59.675045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:43:43.522 [2024-11-20 14:03:59.675068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:42320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.522 [2024-11-20 14:03:59.675080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:43:43.522 [2024-11-20 14:03:59.675103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:42328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.522 [2024-11-20 14:03:59.675115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:43:43.522 [2024-11-20 14:03:59.675139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:42336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.522 [2024-11-20 14:03:59.675151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:43:43.522 [2024-11-20 14:03:59.675174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:42344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.522 [2024-11-20 14:03:59.675186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:43:43.522 [2024-11-20 14:03:59.675209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:42352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.522 [2024-11-20 14:03:59.675221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:43:43.522 [2024-11-20 14:03:59.675244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:42360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.522 [2024-11-20 14:03:59.675255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:43:43.522 [2024-11-20 14:03:59.675280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:42816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.522 [2024-11-20 14:03:59.675292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:43:43.522 [2024-11-20 14:03:59.675316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:42824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.522 [2024-11-20 14:03:59.675334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:43:43.522 [2024-11-20 14:03:59.675357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:42832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.522 [2024-11-20 14:03:59.675371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:43:43.522 [2024-11-20 14:03:59.675394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:42840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.522 [2024-11-20 14:03:59.675405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:43:43.522 [2024-11-20 14:03:59.675429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:42848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.522 [2024-11-20 14:03:59.675441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:43:43.522 [2024-11-20 14:03:59.675465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:42856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.522 [2024-11-20 14:03:59.675477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:43:43.522 [2024-11-20 14:03:59.675499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:42864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.522 [2024-11-20 14:03:59.675511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:43:43.522 [2024-11-20 14:03:59.675534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:42872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.522 [2024-11-20 14:03:59.675545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:43:43.522 [2024-11-20 14:03:59.675570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:42368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.522 [2024-11-20 14:03:59.675582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:43:43.522 [2024-11-20 14:03:59.675605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:42376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.522 [2024-11-20 14:03:59.675616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:43:43.522 [2024-11-20 14:03:59.675639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:42384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.522 [2024-11-20 14:03:59.675651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:43:43.522 [2024-11-20 14:03:59.675675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:42392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.522 [2024-11-20 14:03:59.675687] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:43:43.522 [2024-11-20 14:03:59.675718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:42400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.522 [2024-11-20 14:03:59.675731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:43:43.522 [2024-11-20 14:03:59.675755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:42408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.523 [2024-11-20 14:03:59.675773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:43:43.523 [2024-11-20 14:03:59.675796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:42416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.523 [2024-11-20 14:03:59.675808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:43:43.523 [2024-11-20 14:03:59.675832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:42424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.523 [2024-11-20 14:03:59.675851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:43:43.523 7994.67 IOPS, 31.23 MiB/s [2024-11-20T14:04:40.846Z] 7625.88 IOPS, 29.79 MiB/s [2024-11-20T14:04:40.846Z] 7716.47 IOPS, 30.14 MiB/s [2024-11-20T14:04:40.846Z] 7823.89 IOPS, 30.56 MiB/s [2024-11-20T14:04:40.846Z] 7923.26 IOPS, 30.95 MiB/s [2024-11-20T14:04:40.846Z] 8000.70 IOPS, 31.25 MiB/s [2024-11-20T14:04:40.846Z] 8077.62 IOPS, 31.55 MiB/s [2024-11-20T14:04:40.846Z] [2024-11-20 14:04:06.597153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:98608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.523 [2024-11-20 14:04:06.597236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:43:43.523 [2024-11-20 14:04:06.597292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:98616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.523 [2024-11-20 14:04:06.597307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:43:43.523 [2024-11-20 14:04:06.597325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:98624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.523 [2024-11-20 14:04:06.597337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:43:43.523 [2024-11-20 14:04:06.597365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:98632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.523 [2024-11-20 14:04:06.597385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:43:43.523 [2024-11-20 14:04:06.597408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:98640 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:43:43.523 [2024-11-20 14:04:06.597419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:43:43.523 [2024-11-20 14:04:06.597437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:98648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.523 [2024-11-20 14:04:06.597448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:43:43.523 [2024-11-20 14:04:06.597467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:98656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.523 [2024-11-20 14:04:06.597478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:43:43.523 [2024-11-20 14:04:06.597496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.523 [2024-11-20 14:04:06.597507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:43:43.523 [2024-11-20 14:04:06.597524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:98672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.523 [2024-11-20 14:04:06.597536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:43:43.523 [2024-11-20 14:04:06.597585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.523 [2024-11-20 14:04:06.597597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:43:43.523 [2024-11-20 14:04:06.597614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.523 [2024-11-20 14:04:06.597625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:43:43.523 [2024-11-20 14:04:06.597642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.523 [2024-11-20 14:04:06.597656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:43:43.523 [2024-11-20 14:04:06.597674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:98704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.523 [2024-11-20 14:04:06.597685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.523 [2024-11-20 14:04:06.597703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:98712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.523 [2024-11-20 14:04:06.597727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:43.523 [2024-11-20 14:04:06.597746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 
nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.523 [2024-11-20 14:04:06.597758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:43:43.523 [2024-11-20 14:04:06.597775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:98728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.523 [2024-11-20 14:04:06.597788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:43:43.523 [2024-11-20 14:04:06.597808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:98352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.523 [2024-11-20 14:04:06.597820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:43:43.523 [2024-11-20 14:04:06.597840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:98360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.523 [2024-11-20 14:04:06.597851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:43:43.523 [2024-11-20 14:04:06.597869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:98368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.523 [2024-11-20 14:04:06.597879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:43:43.523 [2024-11-20 14:04:06.597897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:98376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.523 [2024-11-20 14:04:06.597908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:43:43.523 [2024-11-20 14:04:06.597925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:98384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.523 [2024-11-20 14:04:06.597936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:43:43.523 [2024-11-20 14:04:06.597954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:98392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.523 [2024-11-20 14:04:06.597972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:43:43.523 [2024-11-20 14:04:06.597990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:98400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.523 [2024-11-20 14:04:06.598001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:43:43.523 [2024-11-20 14:04:06.598018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:98408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.523 [2024-11-20 14:04:06.598029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:43:43.523 [2024-11-20 14:04:06.598299] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.523 [2024-11-20 14:04:06.598316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:43:43.523 [2024-11-20 14:04:06.598334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:98744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.523 [2024-11-20 14:04:06.598347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:43:43.523 [2024-11-20 14:04:06.598365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.523 [2024-11-20 14:04:06.598376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:43:43.524 [2024-11-20 14:04:06.598395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:98760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.524 [2024-11-20 14:04:06.598407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:43:43.524 [2024-11-20 14:04:06.598426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.524 [2024-11-20 14:04:06.598438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:43:43.524 [2024-11-20 14:04:06.598456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.524 [2024-11-20 14:04:06.598469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:43:43.524 [2024-11-20 14:04:06.598487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.524 [2024-11-20 14:04:06.598499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:43:43.524 [2024-11-20 14:04:06.598518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.524 [2024-11-20 14:04:06.598539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:43:43.524 [2024-11-20 14:04:06.598557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:98800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.524 [2024-11-20 14:04:06.598568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:43:43.524 [2024-11-20 14:04:06.598586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.524 [2024-11-20 14:04:06.598605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 
00:43:43.524 [2024-11-20 14:04:06.598623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:98816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.524 [2024-11-20 14:04:06.598635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:43:43.524 [2024-11-20 14:04:06.598681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.524 [2024-11-20 14:04:06.598693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:43:43.524 [2024-11-20 14:04:06.598710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:98832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.524 [2024-11-20 14:04:06.598720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:43:43.524 [2024-11-20 14:04:06.598737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.524 [2024-11-20 14:04:06.598762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:43:43.524 [2024-11-20 14:04:06.598779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.524 [2024-11-20 14:04:06.598790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:43:43.524 [2024-11-20 14:04:06.598807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:98856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.524 [2024-11-20 14:04:06.598818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:43:43.524 [2024-11-20 14:04:06.598836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:98416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.524 [2024-11-20 14:04:06.598847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:43:43.524 [2024-11-20 14:04:06.598865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:98424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.524 [2024-11-20 14:04:06.598875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:43:43.524 [2024-11-20 14:04:06.598893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:98432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.524 [2024-11-20 14:04:06.598904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:43:43.524 [2024-11-20 14:04:06.598921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:98440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.524 [2024-11-20 14:04:06.598932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:107 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:43:43.524 [2024-11-20 14:04:06.598949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:98448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.524 [2024-11-20 14:04:06.598971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:43:43.524 [2024-11-20 14:04:06.599005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:98456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.524 [2024-11-20 14:04:06.599017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:43.524 [2024-11-20 14:04:06.599047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:98464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.524 [2024-11-20 14:04:06.599059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:43:43.524 [2024-11-20 14:04:06.599078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:98472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.524 [2024-11-20 14:04:06.599090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:43:43.524 [2024-11-20 14:04:06.599127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.524 [2024-11-20 14:04:06.599141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:43:43.524 [2024-11-20 14:04:06.599161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:98872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.524 [2024-11-20 14:04:06.599173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:43:43.524 [2024-11-20 14:04:06.599192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:98880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.524 [2024-11-20 14:04:06.599204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:43:43.524 [2024-11-20 14:04:06.599222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:98888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.524 [2024-11-20 14:04:06.599234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:43:43.524 [2024-11-20 14:04:06.599252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:98896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.524 [2024-11-20 14:04:06.599264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:43:43.524 [2024-11-20 14:04:06.599282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:98904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.524 [2024-11-20 14:04:06.599294] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:43:43.524 [2024-11-20 14:04:06.599313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.524 [2024-11-20 14:04:06.599324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:43:43.524 [2024-11-20 14:04:06.599342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:98920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.524 [2024-11-20 14:04:06.599354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:43:43.524 [2024-11-20 14:04:06.599372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:98928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.524 [2024-11-20 14:04:06.599384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:43:43.524 [2024-11-20 14:04:06.599403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:98936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.524 [2024-11-20 14:04:06.599415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:43:43.524 [2024-11-20 14:04:06.599439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:98944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.524 [2024-11-20 14:04:06.599451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:43:43.525 [2024-11-20 14:04:06.599470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.525 [2024-11-20 14:04:06.599481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:43:43.525 [2024-11-20 14:04:06.599500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:98960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.525 [2024-11-20 14:04:06.599512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:43:43.525 [2024-11-20 14:04:06.599530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:98968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.525 [2024-11-20 14:04:06.599542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:43:43.525 [2024-11-20 14:04:06.599560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.525 [2024-11-20 14:04:06.599575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:43:43.525 [2024-11-20 14:04:06.599593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:43:43.525 [2024-11-20 14:04:06.599606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:43:43.525 [2024-11-20 14:04:06.599624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.525 [2024-11-20 14:04:06.599635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:43:43.525 [2024-11-20 14:04:06.599655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.525 [2024-11-20 14:04:06.599666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:43:43.525 [2024-11-20 14:04:06.599686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.525 [2024-11-20 14:04:06.599697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:43:43.525 [2024-11-20 14:04:06.599715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.525 [2024-11-20 14:04:06.599737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:43:43.525 [2024-11-20 14:04:06.599756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:98480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.525 [2024-11-20 14:04:06.599767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:43:43.525 [2024-11-20 14:04:06.599785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:98488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.525 [2024-11-20 14:04:06.599798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:43:43.525 [2024-11-20 14:04:06.599816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.525 [2024-11-20 14:04:06.599834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:43:43.525 [2024-11-20 14:04:06.599853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:98504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.525 [2024-11-20 14:04:06.599864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:43:43.525 [2024-11-20 14:04:06.599884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:98512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.525 [2024-11-20 14:04:06.599895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:43:43.525 [2024-11-20 14:04:06.599913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 
nsid:1 lba:98520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.525 [2024-11-20 14:04:06.599925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:43:43.525 [2024-11-20 14:04:06.599943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.525 [2024-11-20 14:04:06.599954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:43:43.525 [2024-11-20 14:04:06.599973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:98536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.525 [2024-11-20 14:04:06.599987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:43:43.525 [2024-11-20 14:04:06.600006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:99024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.525 [2024-11-20 14:04:06.600017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:43:43.525 [2024-11-20 14:04:06.600036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.525 [2024-11-20 14:04:06.600047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:43.525 [2024-11-20 14:04:06.600066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:99040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.525 [2024-11-20 14:04:06.600079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:43:43.525 [2024-11-20 14:04:06.600098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:99048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.525 [2024-11-20 14:04:06.600109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:43:43.525 [2024-11-20 14:04:06.600127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.525 [2024-11-20 14:04:06.600139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:43:43.525 [2024-11-20 14:04:06.600158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.525 [2024-11-20 14:04:06.600171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:43:43.525 [2024-11-20 14:04:06.600189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.525 [2024-11-20 14:04:06.600206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:43:43.525 [2024-11-20 14:04:06.600225] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:99080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.525 [2024-11-20 14:04:06.600237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:43:43.525 [2024-11-20 14:04:06.600265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.525 [2024-11-20 14:04:06.600293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:43:43.525 [2024-11-20 14:04:06.600313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.525 [2024-11-20 14:04:06.600324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:43:43.526 [2024-11-20 14:04:06.600342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:99104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.526 [2024-11-20 14:04:06.600354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:43:43.526 [2024-11-20 14:04:06.600372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.526 [2024-11-20 14:04:06.600384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:43:43.526 [2024-11-20 14:04:06.600407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.526 [2024-11-20 14:04:06.600418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:43:43.526 [2024-11-20 14:04:06.600436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:99128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.526 [2024-11-20 14:04:06.600447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:43:43.526 [2024-11-20 14:04:06.600466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.526 [2024-11-20 14:04:06.600477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:43:43.526 [2024-11-20 14:04:06.600496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.526 [2024-11-20 14:04:06.600508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:43:43.526 [2024-11-20 14:04:06.600529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:99152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.526 [2024-11-20 14:04:06.600541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 
00:43:43.526 [2024-11-20 14:04:06.600560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:99160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.526 [2024-11-20 14:04:06.600572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:43:43.526 [2024-11-20 14:04:06.600590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:98544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.526 [2024-11-20 14:04:06.600613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:43.526 [2024-11-20 14:04:06.600638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:98552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.526 [2024-11-20 14:04:06.600648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:43:43.526 [2024-11-20 14:04:06.600666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:98560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.526 [2024-11-20 14:04:06.600676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:43:43.526 [2024-11-20 14:04:06.600702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:98568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.526 [2024-11-20 14:04:06.600762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:43:43.526 [2024-11-20 14:04:06.600779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:98576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.526 [2024-11-20 14:04:06.600789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:43:43.526 [2024-11-20 14:04:06.600806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:98584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.526 [2024-11-20 14:04:06.600816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:43:43.526 [2024-11-20 14:04:06.600834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:98592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.526 [2024-11-20 14:04:06.600845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:43:43.526 [2024-11-20 14:04:06.601652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:98600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.526 [2024-11-20 14:04:06.601677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:43:43.526 [2024-11-20 14:04:06.601705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:99168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.526 [2024-11-20 14:04:06.601718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:43:43.526 [2024-11-20 14:04:06.601760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:99176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.526 [2024-11-20 14:04:06.601774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:43:43.526 [2024-11-20 14:04:06.601800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:99184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.526 [2024-11-20 14:04:06.601813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:43:43.526 [2024-11-20 14:04:06.601837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:99192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.526 [2024-11-20 14:04:06.601849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:43:43.526 [2024-11-20 14:04:06.601873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:99200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.526 [2024-11-20 14:04:06.601885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:43:43.526 [2024-11-20 14:04:06.601922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:99208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.526 [2024-11-20 14:04:06.601935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:43:43.526 [2024-11-20 14:04:06.601960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:99216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.526 [2024-11-20 14:04:06.601971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:43:43.526 [2024-11-20 14:04:06.602063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:99224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.526 [2024-11-20 14:04:06.602079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:43.526 [2024-11-20 14:04:06.602105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:99232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.526 [2024-11-20 14:04:06.602118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:43:43.526 [2024-11-20 14:04:06.602143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.526 [2024-11-20 14:04:06.602155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:43:43.526 [2024-11-20 14:04:06.602179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:99248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.526 [2024-11-20 14:04:06.602192] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:43:43.526 [2024-11-20 14:04:06.602221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.526 [2024-11-20 14:04:06.602233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:43:43.526 [2024-11-20 14:04:06.602258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:99264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.526 [2024-11-20 14:04:06.602271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:43:43.526 [2024-11-20 14:04:06.602295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.526 [2024-11-20 14:04:06.602307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:43:43.526 [2024-11-20 14:04:06.602332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:99280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.526 [2024-11-20 14:04:06.602344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:43:43.526 [2024-11-20 14:04:06.602372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.526 [2024-11-20 14:04:06.602385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:43:43.526 [2024-11-20 14:04:06.602410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:99296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.526 [2024-11-20 14:04:06.602422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:43:43.526 [2024-11-20 14:04:06.602454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.526 [2024-11-20 14:04:06.602466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:43:43.526 [2024-11-20 14:04:06.602491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:99312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.526 [2024-11-20 14:04:06.602502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:43:43.526 8006.09 IOPS, 31.27 MiB/s [2024-11-20T14:04:40.850Z] 7658.00 IOPS, 29.91 MiB/s [2024-11-20T14:04:40.850Z] 7338.92 IOPS, 28.67 MiB/s [2024-11-20T14:04:40.850Z] 7045.36 IOPS, 27.52 MiB/s [2024-11-20T14:04:40.850Z] 6774.38 IOPS, 26.46 MiB/s [2024-11-20T14:04:40.850Z] 6523.48 IOPS, 25.48 MiB/s [2024-11-20T14:04:40.850Z] 6290.50 IOPS, 24.57 MiB/s [2024-11-20T14:04:40.850Z] 6163.72 IOPS, 24.08 MiB/s [2024-11-20T14:04:40.850Z] 6260.67 IOPS, 24.46 MiB/s [2024-11-20T14:04:40.850Z] 6373.29 IOPS, 24.90 MiB/s [2024-11-20T14:04:40.850Z] 
6485.88 IOPS, 25.34 MiB/s [2024-11-20T14:04:40.850Z] 6601.58 IOPS, 25.79 MiB/s [2024-11-20T14:04:40.850Z] 6741.91 IOPS, 26.34 MiB/s [2024-11-20T14:04:40.850Z] [2024-11-20 14:04:19.852233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:88032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.527 [2024-11-20 14:04:19.852311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:43:43.527 [2024-11-20 14:04:19.852359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:88040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.527 [2024-11-20 14:04:19.852373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:43:43.527 [2024-11-20 14:04:19.852391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:88048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.527 [2024-11-20 14:04:19.852402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:43:43.527 [2024-11-20 14:04:19.852421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:88056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.527 [2024-11-20 14:04:19.852432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:43:43.527 [2024-11-20 14:04:19.852449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:88064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.527 [2024-11-20 14:04:19.852460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:43:43.527 [2024-11-20 14:04:19.852477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:88072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.527 [2024-11-20 14:04:19.852488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:43:43.527 [2024-11-20 14:04:19.852505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:88080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.527 [2024-11-20 14:04:19.852516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:43:43.527 [2024-11-20 14:04:19.852534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:88088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.527 [2024-11-20 14:04:19.852546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.527 [2024-11-20 14:04:19.852595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:88096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.527 [2024-11-20 14:04:19.852608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.527 [2024-11-20 14:04:19.852646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:88104 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:43:43.527 [2024-11-20 14:04:19.852658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.527 [2024-11-20 14:04:19.852670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:88112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.527 [2024-11-20 14:04:19.852681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.527 [2024-11-20 14:04:19.852693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:88120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.527 [2024-11-20 14:04:19.852720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.527 [2024-11-20 14:04:19.852733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:88128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.527 [2024-11-20 14:04:19.852743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.527 [2024-11-20 14:04:19.852755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:88136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.527 [2024-11-20 14:04:19.852767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.527 [2024-11-20 14:04:19.852779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.527 [2024-11-20 14:04:19.852790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.527 [2024-11-20 14:04:19.852802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:88152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.527 [2024-11-20 14:04:19.852813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.527 [2024-11-20 14:04:19.852824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:88160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.527 [2024-11-20 14:04:19.852835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.527 [2024-11-20 14:04:19.852860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:88168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.527 [2024-11-20 14:04:19.852872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.527 [2024-11-20 14:04:19.852883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.527 [2024-11-20 14:04:19.852893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.527 [2024-11-20 14:04:19.852905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:87528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.527 [2024-11-20 
14:04:19.852915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.527 [2024-11-20 14:04:19.852928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:87536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.527 [2024-11-20 14:04:19.852939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.527 [2024-11-20 14:04:19.852951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:87544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.527 [2024-11-20 14:04:19.852961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.527 [2024-11-20 14:04:19.852980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:87552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.527 [2024-11-20 14:04:19.852991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.527 [2024-11-20 14:04:19.853003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:87560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.527 [2024-11-20 14:04:19.853013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.527 [2024-11-20 14:04:19.853025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.527 [2024-11-20 14:04:19.853034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.527 [2024-11-20 14:04:19.853046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.527 [2024-11-20 14:04:19.853055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.527 [2024-11-20 14:04:19.853066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:88176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.527 [2024-11-20 14:04:19.853076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.527 [2024-11-20 14:04:19.853087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:88184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.527 [2024-11-20 14:04:19.853097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.527 [2024-11-20 14:04:19.853108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:88192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.527 [2024-11-20 14:04:19.853117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.527 [2024-11-20 14:04:19.853128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:88200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.527 [2024-11-20 14:04:19.853138] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.527 [2024-11-20 14:04:19.853149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:88208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.527 [2024-11-20 14:04:19.853158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.527 [2024-11-20 14:04:19.853170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:88216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.527 [2024-11-20 14:04:19.853180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.527 [2024-11-20 14:04:19.853191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:88224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.528 [2024-11-20 14:04:19.853202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.528 [2024-11-20 14:04:19.853216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:88232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.528 [2024-11-20 14:04:19.853226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.528 [2024-11-20 14:04:19.853237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:88240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.528 [2024-11-20 14:04:19.853269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.528 [2024-11-20 14:04:19.853281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:88248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.528 [2024-11-20 14:04:19.853292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.528 [2024-11-20 14:04:19.853315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:88256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.528 [2024-11-20 14:04:19.853325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.528 [2024-11-20 14:04:19.853354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:88264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.528 [2024-11-20 14:04:19.853365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.528 [2024-11-20 14:04:19.853376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:88272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.528 [2024-11-20 14:04:19.853388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.528 [2024-11-20 14:04:19.853400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:88280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.528 [2024-11-20 14:04:19.853412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.528 [2024-11-20 14:04:19.853425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:88288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.528 [2024-11-20 14:04:19.853437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.528 [2024-11-20 14:04:19.853450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:88296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.528 [2024-11-20 14:04:19.853460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.528 [2024-11-20 14:04:19.853483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:88304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.528 [2024-11-20 14:04:19.853512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.528 [2024-11-20 14:04:19.853525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:88312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.528 [2024-11-20 14:04:19.853535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.528 [2024-11-20 14:04:19.853548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:88320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.528 [2024-11-20 14:04:19.853560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.528 [2024-11-20 14:04:19.853573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:87584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.528 [2024-11-20 14:04:19.853584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.528 [2024-11-20 14:04:19.853597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:87592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.528 [2024-11-20 14:04:19.853610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.528 [2024-11-20 14:04:19.853628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:87600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.528 [2024-11-20 14:04:19.853640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.528 [2024-11-20 14:04:19.853653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:87608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.528 [2024-11-20 14:04:19.853665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.528 [2024-11-20 14:04:19.853678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.528 [2024-11-20 14:04:19.853691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:43:43.528 [2024-11-20 14:04:19.853704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:87624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.528 [2024-11-20 14:04:19.853715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.528 [2024-11-20 14:04:19.853727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:87632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.528 [2024-11-20 14:04:19.853737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.528 [2024-11-20 14:04:19.853749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:87640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.528 [2024-11-20 14:04:19.853761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.528 [2024-11-20 14:04:19.853774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:87648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.528 [2024-11-20 14:04:19.853795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.528 [2024-11-20 14:04:19.853809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:87656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.528 [2024-11-20 14:04:19.853820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.528 [2024-11-20 14:04:19.853833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:87664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.528 [2024-11-20 14:04:19.853846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.528 [2024-11-20 14:04:19.853859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:87672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.528 [2024-11-20 14:04:19.853870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.528 [2024-11-20 14:04:19.853884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:87680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.528 [2024-11-20 14:04:19.853896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.528 [2024-11-20 14:04:19.853909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:87688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.528 [2024-11-20 14:04:19.853920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.528 [2024-11-20 14:04:19.853934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:87696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.528 [2024-11-20 14:04:19.853951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.528 [2024-11-20 
14:04:19.853964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:87704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.528 [2024-11-20 14:04:19.853975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.528 [2024-11-20 14:04:19.853987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:87712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.528 [2024-11-20 14:04:19.853998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.528 [2024-11-20 14:04:19.854010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:87720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.528 [2024-11-20 14:04:19.854021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.528 [2024-11-20 14:04:19.854034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:87728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.528 [2024-11-20 14:04:19.854045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.528 [2024-11-20 14:04:19.854058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:87736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.529 [2024-11-20 14:04:19.854070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.529 [2024-11-20 14:04:19.854083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:87744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.529 [2024-11-20 14:04:19.854093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.529 [2024-11-20 14:04:19.854107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:87752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.529 [2024-11-20 14:04:19.854117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.529 [2024-11-20 14:04:19.854130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:87760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.529 [2024-11-20 14:04:19.854140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.529 [2024-11-20 14:04:19.854152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:87768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.529 [2024-11-20 14:04:19.854163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.529 [2024-11-20 14:04:19.854175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:88328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.529 [2024-11-20 14:04:19.854185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.529 [2024-11-20 14:04:19.854198] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:88336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.529 [2024-11-20 14:04:19.854209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.529 [2024-11-20 14:04:19.854222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:88344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.529 [2024-11-20 14:04:19.854232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.529 [2024-11-20 14:04:19.854244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:88352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.529 [2024-11-20 14:04:19.854261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.529 [2024-11-20 14:04:19.854274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:88360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.529 [2024-11-20 14:04:19.854285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.529 [2024-11-20 14:04:19.854297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:88368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.529 [2024-11-20 14:04:19.854308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.529 [2024-11-20 14:04:19.854320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:88376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.529 [2024-11-20 14:04:19.854330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.529 [2024-11-20 14:04:19.854344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:88384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.529 [2024-11-20 14:04:19.854355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.529 [2024-11-20 14:04:19.854367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:88392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.529 [2024-11-20 14:04:19.854378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.529 [2024-11-20 14:04:19.854390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:88400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.529 [2024-11-20 14:04:19.854401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.529 [2024-11-20 14:04:19.854413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:88408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.529 [2024-11-20 14:04:19.854425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.529 [2024-11-20 14:04:19.854448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:68 nsid:1 lba:87776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.529 [2024-11-20 14:04:19.854459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.529 [2024-11-20 14:04:19.854474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:87784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.529 [2024-11-20 14:04:19.854484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.529 [2024-11-20 14:04:19.854496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.529 [2024-11-20 14:04:19.854506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.529 [2024-11-20 14:04:19.854517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:87800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.529 [2024-11-20 14:04:19.854527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.529 [2024-11-20 14:04:19.854539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:87808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.529 [2024-11-20 14:04:19.854549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.529 [2024-11-20 14:04:19.854565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:87816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.529 [2024-11-20 14:04:19.854575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.529 [2024-11-20 14:04:19.854587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:87824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.529 [2024-11-20 14:04:19.854597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.529 [2024-11-20 14:04:19.854610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:87832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.529 [2024-11-20 14:04:19.854619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.529 [2024-11-20 14:04:19.854630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:87840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.529 [2024-11-20 14:04:19.854641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.529 [2024-11-20 14:04:19.854652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:87848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.529 [2024-11-20 14:04:19.854661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.529 [2024-11-20 14:04:19.854673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:87856 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.529 [2024-11-20 14:04:19.854684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.529 [2024-11-20 14:04:19.854696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:87864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.529 [2024-11-20 14:04:19.854706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.529 [2024-11-20 14:04:19.854736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:87872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.529 [2024-11-20 14:04:19.854746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.529 [2024-11-20 14:04:19.854757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:87880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.529 [2024-11-20 14:04:19.854766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.529 [2024-11-20 14:04:19.854776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.529 [2024-11-20 14:04:19.854785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.529 [2024-11-20 14:04:19.854795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:87896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.529 [2024-11-20 14:04:19.854813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.529 [2024-11-20 14:04:19.854827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.529 [2024-11-20 14:04:19.854838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.529 [2024-11-20 14:04:19.854848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:88424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.529 [2024-11-20 14:04:19.854861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.530 [2024-11-20 14:04:19.854871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:88432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.530 [2024-11-20 14:04:19.854881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.530 [2024-11-20 14:04:19.854890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:88440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.530 [2024-11-20 14:04:19.854899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.530 [2024-11-20 14:04:19.854909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:88448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:43:43.530 [2024-11-20 14:04:19.854918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.530 [2024-11-20 14:04:19.854927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:88456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.530 [2024-11-20 14:04:19.854937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.530 [2024-11-20 14:04:19.854947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:88464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.530 [2024-11-20 14:04:19.854957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.530 [2024-11-20 14:04:19.854974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:88472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:43.530 [2024-11-20 14:04:19.854984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.530 [2024-11-20 14:04:19.855010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:87904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.530 [2024-11-20 14:04:19.855020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.530 [2024-11-20 14:04:19.855033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:87912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.530 [2024-11-20 14:04:19.855042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.530 [2024-11-20 14:04:19.855053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:87920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.530 [2024-11-20 14:04:19.855064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.530 [2024-11-20 14:04:19.855076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:87928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.530 [2024-11-20 14:04:19.855085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.530 [2024-11-20 14:04:19.855096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:87936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.530 [2024-11-20 14:04:19.855107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.530 [2024-11-20 14:04:19.855120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:87944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.530 [2024-11-20 14:04:19.855129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.530 [2024-11-20 14:04:19.855150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:87952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.530 [2024-11-20 14:04:19.855160] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.530 [2024-11-20 14:04:19.855172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:87960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.530 [2024-11-20 14:04:19.855184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.530 [2024-11-20 14:04:19.855197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:87968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.530 [2024-11-20 14:04:19.855208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.530 [2024-11-20 14:04:19.855219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:87976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.530 [2024-11-20 14:04:19.855230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.530 [2024-11-20 14:04:19.855241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:87984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.530 [2024-11-20 14:04:19.855252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.530 [2024-11-20 14:04:19.855263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:87992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.530 [2024-11-20 14:04:19.855272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.530 [2024-11-20 14:04:19.855284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.530 [2024-11-20 14:04:19.855294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.530 [2024-11-20 14:04:19.855305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:88008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.530 [2024-11-20 14:04:19.855315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.530 [2024-11-20 14:04:19.855328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.530 [2024-11-20 14:04:19.855339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.530 [2024-11-20 14:04:19.855349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203f290 is same with the state(6) to be set 00:43:43.530 [2024-11-20 14:04:19.855380] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:43.530 [2024-11-20 14:04:19.855388] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:43.530 [2024-11-20 14:04:19.855396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88024 len:8 PRP1 0x0 PRP2 0x0 00:43:43.530 [2024-11-20 
14:04:19.855407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.530 [2024-11-20 14:04:19.855418] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:43.530 [2024-11-20 14:04:19.855425] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:43.530 [2024-11-20 14:04:19.855433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88480 len:8 PRP1 0x0 PRP2 0x0 00:43:43.530 [2024-11-20 14:04:19.855445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.530 [2024-11-20 14:04:19.855461] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:43.530 [2024-11-20 14:04:19.855469] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:43.530 [2024-11-20 14:04:19.855477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88488 len:8 PRP1 0x0 PRP2 0x0 00:43:43.530 [2024-11-20 14:04:19.855487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.530 [2024-11-20 14:04:19.855497] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:43.530 [2024-11-20 14:04:19.855505] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:43.530 [2024-11-20 14:04:19.855514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88496 len:8 PRP1 0x0 PRP2 0x0 00:43:43.530 [2024-11-20 14:04:19.855524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.530 [2024-11-20 14:04:19.855536] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:43.531 [2024-11-20 14:04:19.855545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:43.531 [2024-11-20 14:04:19.855555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88504 len:8 PRP1 0x0 PRP2 0x0 00:43:43.531 [2024-11-20 14:04:19.855566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.531 [2024-11-20 14:04:19.855577] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:43.531 [2024-11-20 14:04:19.855584] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:43.531 [2024-11-20 14:04:19.855592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88512 len:8 PRP1 0x0 PRP2 0x0 00:43:43.531 [2024-11-20 14:04:19.855602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.531 [2024-11-20 14:04:19.855612] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:43.531 [2024-11-20 14:04:19.855619] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:43.531 [2024-11-20 14:04:19.855627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88520 len:8 PRP1 0x0 PRP2 0x0 00:43:43.531 [2024-11-20 14:04:19.855637] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.531 [2024-11-20 14:04:19.855647] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:43.531 [2024-11-20 14:04:19.855655] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:43.531 [2024-11-20 14:04:19.855662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88528 len:8 PRP1 0x0 PRP2 0x0 00:43:43.531 [2024-11-20 14:04:19.855673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.531 [2024-11-20 14:04:19.855683] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:43.531 [2024-11-20 14:04:19.855691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:43.531 [2024-11-20 14:04:19.855698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88536 len:8 PRP1 0x0 PRP2 0x0 00:43:43.531 [2024-11-20 14:04:19.855709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.531 [2024-11-20 14:04:19.856880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:43:43.531 [2024-11-20 14:04:19.856958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:43.531 [2024-11-20 14:04:19.856987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:43.531 [2024-11-20 14:04:19.857014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb01d0 (9): Bad file descriptor 00:43:43.531 [2024-11-20 14:04:19.857456] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:43:43.531 [2024-11-20 14:04:19.857495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb01d0 with addr=10.0.0.3, port=4421 00:43:43.531 [2024-11-20 14:04:19.857506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb01d0 is same with the state(6) to be set 00:43:43.531 [2024-11-20 14:04:19.857529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb01d0 (9): Bad file descriptor 00:43:43.531 [2024-11-20 14:04:19.857550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:43:43.531 [2024-11-20 14:04:19.857561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:43:43.531 [2024-11-20 14:04:19.857588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:43:43.531 [2024-11-20 14:04:19.857600] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
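What the dump above records is the expected failover path of the multipath test: every I/O still queued on the torn-down qpair is completed manually with ABORTED - SQ DELETION (00/08), the host disconnects the controller, a reconnect to 10.0.0.3 port 4421 is refused (errno 111, connection refused, since that listener is down at that moment), and the reset attempt is reported as failed before a later retry succeeds further down. To pull that timeline out of a log this verbose, a hypothetical post-processing one-liner can help (console.log is an assumed saved copy of this output, not a file the test produces):

grep -nE 'nvme_ctrlr_disconnect|ctrlr_reconnect_poll_async|bdev_nvme_reset_ctrlr_complete' console.log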
00:43:43.531 [2024-11-20 14:04:19.857616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:43:43.531 6872.20 IOPS, 26.84 MiB/s [2024-11-20T14:04:40.854Z] 6991.92 IOPS, 27.31 MiB/s [2024-11-20T14:04:40.854Z] 7097.78 IOPS, 27.73 MiB/s [2024-11-20T14:04:40.854Z] 7203.68 IOPS, 28.14 MiB/s [2024-11-20T14:04:40.854Z] 7302.56 IOPS, 28.53 MiB/s [2024-11-20T14:04:40.854Z] 7398.85 IOPS, 28.90 MiB/s [2024-11-20T14:04:40.854Z] 7484.98 IOPS, 29.24 MiB/s [2024-11-20T14:04:40.854Z] 7565.62 IOPS, 29.55 MiB/s [2024-11-20T14:04:40.854Z] 7643.81 IOPS, 29.86 MiB/s [2024-11-20T14:04:40.854Z] 7708.09 IOPS, 30.11 MiB/s [2024-11-20T14:04:40.854Z] 7781.07 IOPS, 30.39 MiB/s [2024-11-20T14:04:40.854Z] [2024-11-20 14:04:29.886798] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:43:43.531 7847.74 IOPS, 30.66 MiB/s [2024-11-20T14:04:40.854Z] 7905.11 IOPS, 30.88 MiB/s [2024-11-20T14:04:40.854Z] 7969.08 IOPS, 31.13 MiB/s [2024-11-20T14:04:40.854Z] 8021.31 IOPS, 31.33 MiB/s [2024-11-20T14:04:40.854Z] 8074.32 IOPS, 31.54 MiB/s [2024-11-20T14:04:40.854Z] 8121.80 IOPS, 31.73 MiB/s [2024-11-20T14:04:40.854Z] 8171.23 IOPS, 31.92 MiB/s [2024-11-20T14:04:40.854Z] 8214.11 IOPS, 32.09 MiB/s [2024-11-20T14:04:40.854Z] 8259.04 IOPS, 32.26 MiB/s [2024-11-20T14:04:40.854Z] 8303.35 IOPS, 32.43 MiB/s [2024-11-20T14:04:40.854Z] Received shutdown signal, test time was about 55.086399 seconds 00:43:43.531 00:43:43.531 Latency(us) 00:43:43.531 [2024-11-20T14:04:40.854Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:43.531 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:43:43.531 Verification LBA range: start 0x0 length 0x4000 00:43:43.531 Nvme0n1 : 55.09 8304.25 32.44 0.00 0.00 15389.70 225.37 7033243.39 00:43:43.531 [2024-11-20T14:04:40.854Z] =================================================================================================================== 00:43:43.531 [2024-11-20T14:04:40.854Z] Total : 8304.25 32.44 0.00 0.00 15389.70 225.37 7033243.39 00:43:43.531 14:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:43.531 14:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:43:43.531 14:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:43:43.531 14:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:43:43.531 14:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:43.531 14:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:43:43.531 14:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:43.531 14:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:43:43.531 14:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:43.531 14:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:43.531 rmmod nvme_tcp 00:43:43.531 rmmod nvme_fabrics 00:43:43.531 rmmod nvme_keyring 00:43:43.531 14:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:43.531 14:04:40 
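A quick consistency check on the bdevperf summary a few lines up (128-deep, 4 KiB verify workload over ~55 s): the throughput and latency columns agree with each other. A hypothetical spot-check, not part of the test scripts:

awk 'BEGIN { printf "%.2f MiB/s\n", 8304.25*4096/1048576 }'      # ~32.44 MiB/s, matches the reported MiB/s column
awk 'BEGIN { printf "%.1f in flight\n", 8304.25*15389.70/1e6 }'  # ~127.8, i.e. IOPS x avg latency stays pinned at queue depth 128

The ~7.0 s maximum latency is presumably the I/O that sat queued across one of the controller resets above.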
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:43:43.531 14:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:43:43.531 14:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 80863 ']' 00:43:43.531 14:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 80863 00:43:43.531 14:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 80863 ']' 00:43:43.531 14:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 80863 00:43:43.531 14:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:43:43.531 14:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:43.531 14:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80863 00:43:43.531 killing process with pid 80863 00:43:43.531 14:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:43.531 14:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:43.531 14:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80863' 00:43:43.531 14:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 80863 00:43:43.531 14:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 80863 00:43:43.791 14:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:43.791 14:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:43.791 14:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:43.791 14:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:43:43.791 14:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:43:43.791 14:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:43.791 14:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:43:43.791 14:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:43.791 14:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:43:43.791 14:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:43:43.791 14:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:43:43.791 14:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:43:43.791 14:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:43:43.791 14:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:43:43.791 14:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:43:43.792 14:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:43:43.792 14:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 
00:43:43.792 14:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:43:43.792 14:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:43:43.792 14:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:43:43.792 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:43:43.792 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:43:43.792 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:43:43.792 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:43.792 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:43.792 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:43.792 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:43:43.792 00:43:43.792 real 1m1.477s 00:43:43.792 user 2m50.941s 00:43:43.792 sys 0m17.512s 00:43:43.792 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:43.792 ************************************ 00:43:43.792 END TEST nvmf_host_multipath 00:43:43.792 14:04:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:43:43.792 ************************************ 00:43:44.050 14:04:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:43:44.050 14:04:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:43:44.050 14:04:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:44.050 14:04:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:43:44.050 ************************************ 00:43:44.050 START TEST nvmf_timeout 00:43:44.050 ************************************ 00:43:44.050 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:43:44.050 * Looking for test storage... 
00:43:44.050 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:43:44.050 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:43:44.050 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:43:44.050 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:43:44.310 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:43:44.310 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:43:44.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:44.311 --rc genhtml_branch_coverage=1 00:43:44.311 --rc genhtml_function_coverage=1 00:43:44.311 --rc genhtml_legend=1 00:43:44.311 --rc geninfo_all_blocks=1 00:43:44.311 --rc geninfo_unexecuted_blocks=1 00:43:44.311 00:43:44.311 ' 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:43:44.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:44.311 --rc genhtml_branch_coverage=1 00:43:44.311 --rc genhtml_function_coverage=1 00:43:44.311 --rc genhtml_legend=1 00:43:44.311 --rc geninfo_all_blocks=1 00:43:44.311 --rc geninfo_unexecuted_blocks=1 00:43:44.311 00:43:44.311 ' 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:43:44.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:44.311 --rc genhtml_branch_coverage=1 00:43:44.311 --rc genhtml_function_coverage=1 00:43:44.311 --rc genhtml_legend=1 00:43:44.311 --rc geninfo_all_blocks=1 00:43:44.311 --rc geninfo_unexecuted_blocks=1 00:43:44.311 00:43:44.311 ' 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:43:44.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:44.311 --rc genhtml_branch_coverage=1 00:43:44.311 --rc genhtml_function_coverage=1 00:43:44.311 --rc genhtml_legend=1 00:43:44.311 --rc geninfo_all_blocks=1 00:43:44.311 --rc geninfo_unexecuted_blocks=1 00:43:44.311 00:43:44.311 ' 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:44.311 
14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=105ec898-1662-46bd-85be-b241e399edb9 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:44.311 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:44.311 14:04:41 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:44.311 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:44.312 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:44.312 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:43:44.312 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:43:44.312 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:43:44.312 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:43:44.312 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:43:44.312 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:43:44.312 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:44.312 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:43:44.312 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:43:44.312 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:43:44.312 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:44.312 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:43:44.312 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:43:44.312 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:43:44.312 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:43:44.312 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:43:44.312 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:43:44.312 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:44.312 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:43:44.312 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:43:44.312 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:43:44.312 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:43:44.312 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:43:44.312 Cannot find device "nvmf_init_br" 00:43:44.312 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:43:44.312 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:43:44.312 Cannot find device "nvmf_init_br2" 00:43:44.312 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:43:44.312 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:43:44.312 Cannot find device "nvmf_tgt_br" 00:43:44.312 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:43:44.312 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:43:44.312 Cannot find device "nvmf_tgt_br2" 00:43:44.312 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:43:44.312 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:43:44.312 Cannot find device "nvmf_init_br" 00:43:44.312 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:43:44.312 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:43:44.312 Cannot find device "nvmf_init_br2" 00:43:44.312 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:43:44.312 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:43:44.312 Cannot find device "nvmf_tgt_br" 00:43:44.312 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:43:44.312 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:43:44.572 Cannot find device "nvmf_tgt_br2" 00:43:44.572 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:43:44.572 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:43:44.572 Cannot find device "nvmf_br" 00:43:44.572 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:43:44.572 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:43:44.572 Cannot find device "nvmf_init_if" 00:43:44.572 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:43:44.572 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:43:44.572 Cannot find device "nvmf_init_if2" 00:43:44.572 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:43:44.572 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:43:44.572 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:43:44.572 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:43:44.572 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:43:44.572 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:43:44.572 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:43:44.572 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:43:44.572 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:43:44.572 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:43:44.572 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:43:44.572 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:43:44.572 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:43:44.572 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:43:44.572 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:43:44.572 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:43:44.572 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:43:44.572 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:43:44.572 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:43:44.572 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:43:44.572 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:43:44.572 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:43:44.572 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:43:44.572 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:43:44.572 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:43:44.572 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:43:44.572 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:43:44.833 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:43:44.833 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:43:44.833 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:43:44.833 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:43:44.833 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:43:44.833 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:43:44.833 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:43:44.833 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:43:44.833 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:43:44.833 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:43:44.833 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:43:44.833 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
00:43:44.833 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:43:44.833 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:43:44.833 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.132 ms 00:43:44.833 00:43:44.833 --- 10.0.0.3 ping statistics --- 00:43:44.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:44.833 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:43:44.833 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:43:44.833 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:43:44.833 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.126 ms 00:43:44.833 00:43:44.833 --- 10.0.0.4 ping statistics --- 00:43:44.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:44.833 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:43:44.833 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:43:44.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:44.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:43:44.833 00:43:44.833 --- 10.0.0.1 ping statistics --- 00:43:44.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:44.833 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:43:44.833 14:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:43:44.833 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:44.833 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:43:44.833 00:43:44.833 --- 10.0.0.2 ping statistics --- 00:43:44.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:44.833 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:43:44.833 14:04:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:44.833 14:04:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:43:44.833 14:04:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:43:44.833 14:04:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:44.834 14:04:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:44.834 14:04:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:44.834 14:04:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:44.834 14:04:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:44.834 14:04:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:44.834 14:04:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:43:44.834 14:04:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:44.834 14:04:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:44.834 14:04:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:43:44.834 14:04:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=82082 00:43:44.834 14:04:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:43:44.834 14:04:42 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 82082 00:43:44.834 14:04:42 
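The nvmf_veth_init sequence traced above builds a small two-namespace test network: four veth pairs, the root-namespace ends bridged together on nvmf_br, the target ends moved into nvmf_tgt_ns_spdk, iptables ACCEPT rules for the NVMe/TCP port, and ping checks in both directions. A condensed sketch of the same steps (interface names and addresses as logged; ordering and error handling simplified, loops added here only for brevity):

ip netns add nvmf_tgt_ns_spdk
# four veth pairs: the *_if ends carry addresses, the *_br ends get bridged
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# initiator addresses in the root namespace, target addresses inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
# bring everything up and enslave the root-namespace ends to one bridge
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
# allow NVMe/TCP (port 4420) in from the initiator interfaces, plus bridge-internal forwarding
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# reachability both ways before any NVMe traffic
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2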
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82082 ']' 00:43:44.834 14:04:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:44.834 14:04:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:44.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:44.834 14:04:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:44.834 14:04:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:44.834 14:04:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:43:44.834 [2024-11-20 14:04:42.123007] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:43:44.834 [2024-11-20 14:04:42.123112] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:45.093 [2024-11-20 14:04:42.279960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:43:45.093 [2024-11-20 14:04:42.337640] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:45.093 [2024-11-20 14:04:42.337699] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:45.093 [2024-11-20 14:04:42.337714] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:45.093 [2024-11-20 14:04:42.337722] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:45.093 [2024-11-20 14:04:42.337727] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:43:45.093 [2024-11-20 14:04:42.338668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:45.093 [2024-11-20 14:04:42.338670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:45.093 [2024-11-20 14:04:42.382452] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:43:46.031 14:04:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:46.031 14:04:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:43:46.031 14:04:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:46.031 14:04:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:46.031 14:04:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:43:46.031 14:04:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:46.031 14:04:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:43:46.031 14:04:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:43:46.031 [2024-11-20 14:04:43.275017] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:46.031 14:04:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:43:46.290 Malloc0 00:43:46.290 14:04:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:43:46.550 14:04:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:46.809 14:04:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:43:47.069 [2024-11-20 14:04:44.307201] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:43:47.069 14:04:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=82131 00:43:47.069 14:04:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:43:47.069 14:04:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 82131 /var/tmp/bdevperf.sock 00:43:47.069 14:04:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82131 ']' 00:43:47.069 14:04:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:43:47.069 14:04:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:47.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:43:47.069 14:04:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:43:47.069 14:04:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:47.070 14:04:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:43:47.070 [2024-11-20 14:04:44.386437] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:43:47.070 [2024-11-20 14:04:44.386531] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82131 ] 00:43:47.329 [2024-11-20 14:04:44.538445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:47.329 [2024-11-20 14:04:44.597539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:43:47.329 [2024-11-20 14:04:44.642403] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:43:48.264 14:04:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:48.265 14:04:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:43:48.265 14:04:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:43:48.265 14:04:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:43:48.831 NVMe0n1 00:43:48.831 14:04:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=82159 00:43:48.831 14:04:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:43:48.831 14:04:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:43:48.831 Running I/O for 10 seconds... 
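Before the "Running I/O for 10 seconds" line above, the timeout test has already stood up a target inside the namespace and attached a bdevperf host to it over that veth network. Condensed from the RPC trace above (flags, NQN and the 5 s controller-loss / 2 s reconnect-delay values exactly as logged; paths shortened to the SPDK repo root, and the waitforlisten steps omitted):

# target: nvmf_tgt on cores 0-1 inside the test namespace
ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
# host: bdevperf on core 2, 128-deep 4 KiB verify, attached with the short reconnect/loss timeouts from the trace
build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests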
00:43:49.765 14:04:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:43:50.026 8512.00 IOPS, 33.25 MiB/s [2024-11-20T14:04:47.349Z] [2024-11-20 14:04:47.152597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:77312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:50.026 [2024-11-20 14:04:47.152665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.026 [2024-11-20 14:04:47.152687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:77320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:50.026 [2024-11-20 14:04:47.152695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.026 [2024-11-20 14:04:47.152717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:77328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:50.026 [2024-11-20 14:04:47.152725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.026 [2024-11-20 14:04:47.152734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:77336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:50.026 [2024-11-20 14:04:47.152740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.026 [2024-11-20 14:04:47.152750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:77344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:50.026 [2024-11-20 14:04:47.152757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.027 [2024-11-20 14:04:47.152765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:77352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:50.027 [2024-11-20 14:04:47.152772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.027 [2024-11-20 14:04:47.152781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:77360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:50.027 [2024-11-20 14:04:47.152787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.027 [2024-11-20 14:04:47.152795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:77368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:50.027 [2024-11-20 14:04:47.152801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.027 [2024-11-20 14:04:47.152810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:76800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.027 [2024-11-20 14:04:47.152816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.027 [2024-11-20 14:04:47.152824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:76808 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.027 [2024-11-20 14:04:47.152832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.027 [2024-11-20 14:04:47.152841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:76816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.027 [2024-11-20 14:04:47.152848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.027 [2024-11-20 14:04:47.152857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:76824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.027 [2024-11-20 14:04:47.152863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.027 [2024-11-20 14:04:47.152872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:76832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.027 [2024-11-20 14:04:47.152878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.027 [2024-11-20 14:04:47.152886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:76840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.027 [2024-11-20 14:04:47.152893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.027 [2024-11-20 14:04:47.152901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:76848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.027 [2024-11-20 14:04:47.152908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.027 [2024-11-20 14:04:47.152916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:76856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.027 [2024-11-20 14:04:47.152923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.027 [2024-11-20 14:04:47.152931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.027 [2024-11-20 14:04:47.152938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.027 [2024-11-20 14:04:47.152949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:76872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.027 [2024-11-20 14:04:47.152956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.027 [2024-11-20 14:04:47.152999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:76880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.027 [2024-11-20 14:04:47.153007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.027 [2024-11-20 14:04:47.153019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:76888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:43:50.027 [2024-11-20 14:04:47.153026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.027 [2024-11-20 14:04:47.153038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:76896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.027 [2024-11-20 14:04:47.153045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.027 [2024-11-20 14:04:47.153054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:76904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.027 [2024-11-20 14:04:47.153060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.027 [2024-11-20 14:04:47.153070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:76912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.027 [2024-11-20 14:04:47.153079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.027 [2024-11-20 14:04:47.153087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:76920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.027 [2024-11-20 14:04:47.153093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.027 [2024-11-20 14:04:47.153102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:77376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:50.027 [2024-11-20 14:04:47.153109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.027 [2024-11-20 14:04:47.153117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:77384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:50.027 [2024-11-20 14:04:47.153129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.027 [2024-11-20 14:04:47.153140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:77392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:50.027 [2024-11-20 14:04:47.153146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.027 [2024-11-20 14:04:47.153155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:77400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:50.027 [2024-11-20 14:04:47.153161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.027 [2024-11-20 14:04:47.153173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:77408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:50.027 [2024-11-20 14:04:47.153180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.027 [2024-11-20 14:04:47.153189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:77416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:50.027 [2024-11-20 14:04:47.153197] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.027 [2024-11-20 14:04:47.153207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:77424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:50.027 [2024-11-20 14:04:47.153217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.027 [2024-11-20 14:04:47.153227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:50.027 [2024-11-20 14:04:47.153234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.027 [2024-11-20 14:04:47.153242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:76928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.027 [2024-11-20 14:04:47.153251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.027 [2024-11-20 14:04:47.153261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:76936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.027 [2024-11-20 14:04:47.153268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.027 [2024-11-20 14:04:47.153276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:76944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.027 [2024-11-20 14:04:47.153286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.027 [2024-11-20 14:04:47.153295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:76952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.027 [2024-11-20 14:04:47.153302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.027 [2024-11-20 14:04:47.153311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:76960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.027 [2024-11-20 14:04:47.153317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.027 [2024-11-20 14:04:47.153328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:76968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.027 [2024-11-20 14:04:47.153335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.027 [2024-11-20 14:04:47.153344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:76976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.027 [2024-11-20 14:04:47.153355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.027 [2024-11-20 14:04:47.153364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:76984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.027 [2024-11-20 14:04:47.153370] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.027 [2024-11-20 14:04:47.153379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:76992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.027 [2024-11-20 14:04:47.153385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.027 [2024-11-20 14:04:47.153398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:77000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.027 [2024-11-20 14:04:47.153407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.027 [2024-11-20 14:04:47.153421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:77008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.027 [2024-11-20 14:04:47.153431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.027 [2024-11-20 14:04:47.153440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:77016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.027 [2024-11-20 14:04:47.153450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.027 [2024-11-20 14:04:47.153459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:77024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.028 [2024-11-20 14:04:47.153466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.028 [2024-11-20 14:04:47.153474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:77032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.028 [2024-11-20 14:04:47.153481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.028 [2024-11-20 14:04:47.153492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:77040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.028 [2024-11-20 14:04:47.153499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.028 [2024-11-20 14:04:47.153508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:77048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.028 [2024-11-20 14:04:47.153514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.028 [2024-11-20 14:04:47.153523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.028 [2024-11-20 14:04:47.153529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.028 [2024-11-20 14:04:47.153541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:77064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.028 [2024-11-20 14:04:47.153548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.028 [2024-11-20 14:04:47.153557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:77072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.028 [2024-11-20 14:04:47.153563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.028 [2024-11-20 14:04:47.153572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:77080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.028 [2024-11-20 14:04:47.153582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.028 [2024-11-20 14:04:47.153591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:77088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.028 [2024-11-20 14:04:47.153598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.028 [2024-11-20 14:04:47.153606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:77096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.028 [2024-11-20 14:04:47.153613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.028 [2024-11-20 14:04:47.153630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:77104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.028 [2024-11-20 14:04:47.153639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.028 [2024-11-20 14:04:47.153647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:77112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.028 [2024-11-20 14:04:47.153654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.028 [2024-11-20 14:04:47.153666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:77440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:50.028 [2024-11-20 14:04:47.153672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.028 [2024-11-20 14:04:47.153680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:77448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:50.028 [2024-11-20 14:04:47.153690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.028 [2024-11-20 14:04:47.153700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:50.028 [2024-11-20 14:04:47.153718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.028 [2024-11-20 14:04:47.153727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:50.028 [2024-11-20 14:04:47.153734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:43:50.028 [2024-11-20 14:04:47.153742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:77472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:50.028 [2024-11-20 14:04:47.153749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.028 [2024-11-20 14:04:47.153758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:77480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:50.028 [2024-11-20 14:04:47.153767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.028 [2024-11-20 14:04:47.153776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:77488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:50.028 [2024-11-20 14:04:47.153783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.028 [2024-11-20 14:04:47.153792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:77496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:50.028 [2024-11-20 14:04:47.153799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.028 [2024-11-20 14:04:47.153810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:77504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:50.028 [2024-11-20 14:04:47.153816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.028 [2024-11-20 14:04:47.153826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:77512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:50.028 [2024-11-20 14:04:47.153832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.028 [2024-11-20 14:04:47.153840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:77520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:50.028 [2024-11-20 14:04:47.153847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.028 [2024-11-20 14:04:47.153855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:77528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:50.028 [2024-11-20 14:04:47.153864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.028 [2024-11-20 14:04:47.153872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:77536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:50.028 [2024-11-20 14:04:47.153879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.028 [2024-11-20 14:04:47.153888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:50.028 [2024-11-20 14:04:47.153894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.028 [2024-11-20 14:04:47.153903] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:77552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:50.028 [2024-11-20 14:04:47.153909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.028 [2024-11-20 14:04:47.153921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:77560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:50.028 [2024-11-20 14:04:47.153927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.028 [2024-11-20 14:04:47.153935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:77120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.028 [2024-11-20 14:04:47.153942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.028 [2024-11-20 14:04:47.153951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:77128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.028 [2024-11-20 14:04:47.153957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.028 [2024-11-20 14:04:47.153977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:77136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.028 [2024-11-20 14:04:47.153984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.028 [2024-11-20 14:04:47.153993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:77144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.028 [2024-11-20 14:04:47.153999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.028 [2024-11-20 14:04:47.154008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:77152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.028 [2024-11-20 14:04:47.154018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.028 [2024-11-20 14:04:47.154027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:77160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.028 [2024-11-20 14:04:47.154033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.028 [2024-11-20 14:04:47.154042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:77168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.028 [2024-11-20 14:04:47.154049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.028 [2024-11-20 14:04:47.154060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:77176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.028 [2024-11-20 14:04:47.154067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.028 [2024-11-20 14:04:47.154076] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:77184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.028 [2024-11-20 14:04:47.154082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.028 [2024-11-20 14:04:47.154091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:77192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.028 [2024-11-20 14:04:47.154098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.028 [2024-11-20 14:04:47.154107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:77200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.028 [2024-11-20 14:04:47.154117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.028 [2024-11-20 14:04:47.154126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:77208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.029 [2024-11-20 14:04:47.154134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.029 [2024-11-20 14:04:47.154142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:77216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.029 [2024-11-20 14:04:47.154149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.029 [2024-11-20 14:04:47.154157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:77224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.029 [2024-11-20 14:04:47.154166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.029 [2024-11-20 14:04:47.154175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:77232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.029 [2024-11-20 14:04:47.154182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.029 [2024-11-20 14:04:47.154190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.029 [2024-11-20 14:04:47.154197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.029 [2024-11-20 14:04:47.154208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:77568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:50.029 [2024-11-20 14:04:47.154214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.029 [2024-11-20 14:04:47.154223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:77576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:50.029 [2024-11-20 14:04:47.154230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.029 [2024-11-20 14:04:47.154240] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:77584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:50.029 [2024-11-20 14:04:47.154247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.029 [2024-11-20 14:04:47.154259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:77592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:50.029 [2024-11-20 14:04:47.154267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.029 [2024-11-20 14:04:47.154275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:77600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:50.029 [2024-11-20 14:04:47.154281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.029 [2024-11-20 14:04:47.154290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:77608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:50.029 [2024-11-20 14:04:47.154299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.029 [2024-11-20 14:04:47.154308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:77616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:50.029 [2024-11-20 14:04:47.154314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.029 [2024-11-20 14:04:47.154323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:77624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:50.029 [2024-11-20 14:04:47.154329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.029 [2024-11-20 14:04:47.154341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:77248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.029 [2024-11-20 14:04:47.154348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.029 [2024-11-20 14:04:47.154358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:77256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.029 [2024-11-20 14:04:47.154365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.029 [2024-11-20 14:04:47.154374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:77264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.029 [2024-11-20 14:04:47.154380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.029 [2024-11-20 14:04:47.154389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:77272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.029 [2024-11-20 14:04:47.154396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.029 [2024-11-20 14:04:47.154407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:77280 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.029 [2024-11-20 14:04:47.154414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.029 [2024-11-20 14:04:47.154423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:77288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.029 [2024-11-20 14:04:47.154429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.029 [2024-11-20 14:04:47.154437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:50.029 [2024-11-20 14:04:47.154446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.029 [2024-11-20 14:04:47.154455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc7f60 is same with the state(6) to be set 00:43:50.029 [2024-11-20 14:04:47.154465] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:50.029 [2024-11-20 14:04:47.154470] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:50.029 [2024-11-20 14:04:47.154476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77304 len:8 PRP1 0x0 PRP2 0x0 00:43:50.029 [2024-11-20 14:04:47.154483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.029 [2024-11-20 14:04:47.154494] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:50.029 [2024-11-20 14:04:47.154500] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:50.029 [2024-11-20 14:04:47.154505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77632 len:8 PRP1 0x0 PRP2 0x0 00:43:50.029 [2024-11-20 14:04:47.154512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.029 [2024-11-20 14:04:47.154519] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:50.029 [2024-11-20 14:04:47.154524] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:50.029 [2024-11-20 14:04:47.154529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77640 len:8 PRP1 0x0 PRP2 0x0 00:43:50.029 [2024-11-20 14:04:47.154538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.029 [2024-11-20 14:04:47.154545] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:50.029 [2024-11-20 14:04:47.154550] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:50.029 [2024-11-20 14:04:47.154555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77648 len:8 PRP1 0x0 PRP2 0x0 00:43:50.029 [2024-11-20 14:04:47.154561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.029 [2024-11-20 14:04:47.154569] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:50.029 
[2024-11-20 14:04:47.154578] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:50.029 [2024-11-20 14:04:47.154585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77656 len:8 PRP1 0x0 PRP2 0x0 00:43:50.029 [2024-11-20 14:04:47.154592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.029 [2024-11-20 14:04:47.154599] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:50.029 [2024-11-20 14:04:47.154604] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:50.029 [2024-11-20 14:04:47.154610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77664 len:8 PRP1 0x0 PRP2 0x0 00:43:50.029 [2024-11-20 14:04:47.154619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.029 [2024-11-20 14:04:47.154626] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:50.029 [2024-11-20 14:04:47.154631] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:50.029 [2024-11-20 14:04:47.154636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77672 len:8 PRP1 0x0 PRP2 0x0 00:43:50.029 [2024-11-20 14:04:47.154643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.029 [2024-11-20 14:04:47.154650] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:50.029 [2024-11-20 14:04:47.154655] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:50.029 [2024-11-20 14:04:47.154660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77680 len:8 PRP1 0x0 PRP2 0x0 00:43:50.029 [2024-11-20 14:04:47.154669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.029 [2024-11-20 14:04:47.154677] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:50.029 [2024-11-20 14:04:47.154683] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:50.029 [2024-11-20 14:04:47.154688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77688 len:8 PRP1 0x0 PRP2 0x0 00:43:50.029 [2024-11-20 14:04:47.154695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.029 [2024-11-20 14:04:47.154701] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:50.029 [2024-11-20 14:04:47.154721] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:50.029 [2024-11-20 14:04:47.154727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77696 len:8 PRP1 0x0 PRP2 0x0 00:43:50.029 [2024-11-20 14:04:47.154734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.029 [2024-11-20 14:04:47.154741] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:50.029 [2024-11-20 14:04:47.154746] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:50.029 [2024-11-20 14:04:47.154752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77704 len:8 PRP1 0x0 PRP2 0x0 00:43:50.029 [2024-11-20 14:04:47.154758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.029 [2024-11-20 14:04:47.154765] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:50.029 [2024-11-20 14:04:47.154770] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:50.030 [2024-11-20 14:04:47.154775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77712 len:8 PRP1 0x0 PRP2 0x0 00:43:50.030 [2024-11-20 14:04:47.154784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.030 [2024-11-20 14:04:47.154791] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:50.030 [2024-11-20 14:04:47.154798] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:50.030 [2024-11-20 14:04:47.154804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77720 len:8 PRP1 0x0 PRP2 0x0 00:43:50.030 [2024-11-20 14:04:47.154810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.030 [2024-11-20 14:04:47.154817] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:50.030 [2024-11-20 14:04:47.154823] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:50.030 [2024-11-20 14:04:47.154832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77728 len:8 PRP1 0x0 PRP2 0x0 00:43:50.030 [2024-11-20 14:04:47.154838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.030 [2024-11-20 14:04:47.154845] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:50.030 [2024-11-20 14:04:47.154850] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:50.030 [2024-11-20 14:04:47.154856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77736 len:8 PRP1 0x0 PRP2 0x0 00:43:50.030 [2024-11-20 14:04:47.154862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.030 [2024-11-20 14:04:47.154869] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:50.030 [2024-11-20 14:04:47.154874] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:50.030 [2024-11-20 14:04:47.154882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77744 len:8 PRP1 0x0 PRP2 0x0 00:43:50.030 [2024-11-20 14:04:47.154888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.030 [2024-11-20 14:04:47.154895] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:50.030 [2024-11-20 14:04:47.154901] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:43:50.030 [2024-11-20 14:04:47.154907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77752 len:8 PRP1 0x0 PRP2 0x0 00:43:50.030 [2024-11-20 14:04:47.154913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.030 [2024-11-20 14:04:47.154920] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:50.030 [2024-11-20 14:04:47.154924] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:50.030 [2024-11-20 14:04:47.154932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77760 len:8 PRP1 0x0 PRP2 0x0 00:43:50.030 [2024-11-20 14:04:47.154939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.030 [2024-11-20 14:04:47.154946] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:50.030 [2024-11-20 14:04:47.154951] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:50.030 [2024-11-20 14:04:47.154956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77768 len:8 PRP1 0x0 PRP2 0x0 00:43:50.030 [2024-11-20 14:04:47.154962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.030 [2024-11-20 14:04:47.154969] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:50.030 [2024-11-20 14:04:47.154976] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:50.030 [2024-11-20 14:04:47.154982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77776 len:8 PRP1 0x0 PRP2 0x0 00:43:50.030 [2024-11-20 14:04:47.155000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.030 [2024-11-20 14:04:47.155007] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:50.030 [2024-11-20 14:04:47.173342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:50.030 [2024-11-20 14:04:47.173420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77784 len:8 PRP1 0x0 PRP2 0x0 00:43:50.030 [2024-11-20 14:04:47.173436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.030 [2024-11-20 14:04:47.173459] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:50.030 [2024-11-20 14:04:47.173469] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:50.030 [2024-11-20 14:04:47.173477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77792 len:8 PRP1 0x0 PRP2 0x0 00:43:50.030 [2024-11-20 14:04:47.173487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.030 [2024-11-20 14:04:47.173497] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:50.030 [2024-11-20 14:04:47.173505] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:50.030 [2024-11-20 
14:04:47.173512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77800 len:8 PRP1 0x0 PRP2 0x0 00:43:50.030 [2024-11-20 14:04:47.173523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.030 [2024-11-20 14:04:47.173532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:50.030 [2024-11-20 14:04:47.173540] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:50.030 [2024-11-20 14:04:47.173548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77808 len:8 PRP1 0x0 PRP2 0x0 00:43:50.030 [2024-11-20 14:04:47.173556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.030 [2024-11-20 14:04:47.173566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:50.030 [2024-11-20 14:04:47.173573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:50.030 [2024-11-20 14:04:47.173597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77816 len:8 PRP1 0x0 PRP2 0x0 00:43:50.030 [2024-11-20 14:04:47.173606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.030 [2024-11-20 14:04:47.173929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:43:50.030 [2024-11-20 14:04:47.173962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.030 [2024-11-20 14:04:47.173976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:43:50.030 [2024-11-20 14:04:47.173985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.030 [2024-11-20 14:04:47.173995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:43:50.030 [2024-11-20 14:04:47.174005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.030 [2024-11-20 14:04:47.174014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:43:50.030 [2024-11-20 14:04:47.174023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:50.030 [2024-11-20 14:04:47.174031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5ae50 is same with the state(6) to be set 00:43:50.030 [2024-11-20 14:04:47.174383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:43:50.030 [2024-11-20 14:04:47.174417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5ae50 (9): Bad file descriptor 00:43:50.030 [2024-11-20 14:04:47.174535] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:43:50.030 [2024-11-20 14:04:47.174562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock 
connection error of tqpair=0x1f5ae50 with addr=10.0.0.3, port=4420 00:43:50.030 [2024-11-20 14:04:47.174575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5ae50 is same with the state(6) to be set 00:43:50.030 [2024-11-20 14:04:47.174592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5ae50 (9): Bad file descriptor 00:43:50.030 [2024-11-20 14:04:47.174607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:43:50.030 [2024-11-20 14:04:47.174616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:43:50.030 [2024-11-20 14:04:47.174627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:43:50.030 [2024-11-20 14:04:47.174637] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:43:50.030 [2024-11-20 14:04:47.174648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:43:50.030 14:04:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:43:51.897 4800.00 IOPS, 18.75 MiB/s [2024-11-20T14:04:49.220Z] 3200.00 IOPS, 12.50 MiB/s [2024-11-20T14:04:49.220Z] [2024-11-20 14:04:49.171112] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:43:51.897 [2024-11-20 14:04:49.171188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f5ae50 with addr=10.0.0.3, port=4420 00:43:51.897 [2024-11-20 14:04:49.171201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5ae50 is same with the state(6) to be set 00:43:51.897 [2024-11-20 14:04:49.171224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5ae50 (9): Bad file descriptor 00:43:51.897 [2024-11-20 14:04:49.171239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:43:51.897 [2024-11-20 14:04:49.171247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:43:51.897 [2024-11-20 14:04:49.171256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:43:51.897 [2024-11-20 14:04:49.171265] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:43:51.897 [2024-11-20 14:04:49.171274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:43:51.897 14:04:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:43:51.897 14:04:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:43:51.897 14:04:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:43:52.155 14:04:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:43:52.155 14:04:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:43:52.155 14:04:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:43:52.155 14:04:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:43:52.423 14:04:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:43:52.423 14:04:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:43:53.945 2400.00 IOPS, 9.38 MiB/s [2024-11-20T14:04:51.268Z] 1920.00 IOPS, 7.50 MiB/s [2024-11-20T14:04:51.268Z] [2024-11-20 14:04:51.167724] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:43:53.945 [2024-11-20 14:04:51.167797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f5ae50 with addr=10.0.0.3, port=4420 00:43:53.945 [2024-11-20 14:04:51.167810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5ae50 is same with the state(6) to be set 00:43:53.945 [2024-11-20 14:04:51.167833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5ae50 (9): Bad file descriptor 00:43:53.945 [2024-11-20 14:04:51.167848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:43:53.945 [2024-11-20 14:04:51.167856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:43:53.945 [2024-11-20 14:04:51.167864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:43:53.945 [2024-11-20 14:04:51.167873] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:43:53.945 [2024-11-20 14:04:51.167882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:43:55.816 1600.00 IOPS, 6.25 MiB/s [2024-11-20T14:04:53.398Z] 1371.43 IOPS, 5.36 MiB/s [2024-11-20T14:04:53.398Z] [2024-11-20 14:04:53.164176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:43:56.075 [2024-11-20 14:04:53.164248] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:43:56.075 [2024-11-20 14:04:53.164256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:43:56.075 [2024-11-20 14:04:53.164266] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:43:56.075 [2024-11-20 14:04:53.164275] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:43:57.012 1200.00 IOPS, 4.69 MiB/s 00:43:57.012 Latency(us) 00:43:57.012 [2024-11-20T14:04:54.335Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:57.012 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:43:57.012 Verification LBA range: start 0x0 length 0x4000 00:43:57.012 NVMe0n1 : 8.14 1178.97 4.61 15.72 0.00 106978.37 3834.86 7033243.39 00:43:57.012 [2024-11-20T14:04:54.335Z] =================================================================================================================== 00:43:57.012 [2024-11-20T14:04:54.335Z] Total : 1178.97 4.61 15.72 0.00 106978.37 3834.86 7033243.39 00:43:57.012 { 00:43:57.012 "results": [ 00:43:57.012 { 00:43:57.012 "job": "NVMe0n1", 00:43:57.012 "core_mask": "0x4", 00:43:57.012 "workload": "verify", 00:43:57.012 "status": "finished", 00:43:57.012 "verify_range": { 00:43:57.012 "start": 0, 00:43:57.012 "length": 16384 00:43:57.012 }, 00:43:57.012 "queue_depth": 128, 00:43:57.012 "io_size": 4096, 00:43:57.012 "runtime": 8.142673, 00:43:57.012 "iops": 1178.9740297811295, 00:43:57.012 "mibps": 4.605367303832537, 00:43:57.012 "io_failed": 128, 00:43:57.012 "io_timeout": 0, 00:43:57.012 "avg_latency_us": 106978.36524936795, 00:43:57.012 "min_latency_us": 3834.8576419213973, 00:43:57.012 "max_latency_us": 7033243.388646288 00:43:57.012 } 00:43:57.012 ], 00:43:57.012 "core_count": 1 00:43:57.012 } 00:43:57.580 14:04:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:43:57.580 14:04:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:43:57.580 14:04:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:43:57.838 14:04:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:43:57.838 14:04:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:43:57.838 14:04:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:43:57.838 14:04:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:43:58.097 14:04:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:43:58.097 14:04:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 82159 00:43:58.097 14:04:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 82131 00:43:58.097 14:04:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82131 ']' 00:43:58.097 14:04:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82131 00:43:58.097 14:04:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:43:58.097 14:04:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:58.097 14:04:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82131 00:43:58.097 killing process with pid 82131 00:43:58.097 Received shutdown signal, test time was about 9.236065 seconds 00:43:58.097 00:43:58.097 Latency(us) 00:43:58.097 [2024-11-20T14:04:55.420Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:58.097 [2024-11-20T14:04:55.420Z] =================================================================================================================== 00:43:58.097 [2024-11-20T14:04:55.420Z] Total : 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 00:43:58.097 14:04:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:43:58.097 14:04:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:43:58.097 14:04:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82131' 00:43:58.097 14:04:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82131 00:43:58.097 14:04:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82131 00:43:58.356 14:04:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:43:58.615 [2024-11-20 14:04:55.741949] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:43:58.615 14:04:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82277 00:43:58.615 14:04:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:43:58.615 14:04:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82277 /var/tmp/bdevperf.sock 00:43:58.615 14:04:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82277 ']' 00:43:58.615 14:04:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:43:58.615 14:04:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:58.615 14:04:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:43:58.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:43:58.615 14:04:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:58.615 14:04:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:43:58.615 [2024-11-20 14:04:55.821371] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:43:58.615 [2024-11-20 14:04:55.821468] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82277 ] 00:43:58.873 [2024-11-20 14:04:55.973783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:58.874 [2024-11-20 14:04:56.057028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:43:58.874 [2024-11-20 14:04:56.135218] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:43:59.809 14:04:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:59.809 14:04:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:43:59.809 14:04:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:43:59.809 14:04:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:44:00.067 NVMe0n1 00:44:00.067 14:04:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82301 00:44:00.067 14:04:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:44:00.067 14:04:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:44:00.325 Running I/O for 10 seconds... 
00:44:01.262 14:04:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:44:01.524 9936.00 IOPS, 38.81 MiB/s [2024-11-20T14:04:58.847Z] [2024-11-20 14:04:58.601122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9640a0 is same with the state(6) to be set 00:44:01.524 [2024-11-20 14:04:58.601201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9640a0 is same with the state(6) to be set 00:44:01.524 [2024-11-20 14:04:58.601208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9640a0 is same with the state(6) to be set 00:44:01.524 [2024-11-20 14:04:58.601345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:90400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:01.524 [2024-11-20 14:04:58.601381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.524 [2024-11-20 14:04:58.601403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:90408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:01.524 [2024-11-20 14:04:58.601411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.524 [2024-11-20 14:04:58.601421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:90416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:01.524 [2024-11-20 14:04:58.601429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.524 [2024-11-20 14:04:58.601438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:01.524 [2024-11-20 14:04:58.601444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.525 [2024-11-20 14:04:58.601453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:90752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.525 [2024-11-20 14:04:58.601459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.525 [2024-11-20 14:04:58.601467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:90760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.525 [2024-11-20 14:04:58.601474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.525 [2024-11-20 14:04:58.601482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:90768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.525 [2024-11-20 14:04:58.601488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.525 [2024-11-20 14:04:58.601495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:90776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.525 [2024-11-20 14:04:58.601501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:44:01.525 [2024-11-20 14:04:58.601509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:90784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.525 [2024-11-20 14:04:58.601515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.525 [2024-11-20 14:04:58.601526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:90792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.525 [2024-11-20 14:04:58.601532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.525 [2024-11-20 14:04:58.601540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:90800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.525 [2024-11-20 14:04:58.601546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.525 [2024-11-20 14:04:58.601553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:90808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.525 [2024-11-20 14:04:58.601559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.525 [2024-11-20 14:04:58.601567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:90816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.525 [2024-11-20 14:04:58.601573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.525 [2024-11-20 14:04:58.601581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:90824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.525 [2024-11-20 14:04:58.601587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.525 [2024-11-20 14:04:58.601595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:90832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.525 [2024-11-20 14:04:58.601601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.525 [2024-11-20 14:04:58.601608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:90840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.525 [2024-11-20 14:04:58.601616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.525 [2024-11-20 14:04:58.601624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:90848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.525 [2024-11-20 14:04:58.601631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.525 [2024-11-20 14:04:58.601639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:90856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.525 [2024-11-20 14:04:58.601645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.525 [2024-11-20 14:04:58.601653] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:90864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.525 [2024-11-20 14:04:58.601660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.525 [2024-11-20 14:04:58.601669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:90872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.525 [2024-11-20 14:04:58.601732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.525 [2024-11-20 14:04:58.601741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:90432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:01.525 [2024-11-20 14:04:58.601750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.525 [2024-11-20 14:04:58.601759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:90440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:01.525 [2024-11-20 14:04:58.601765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.525 [2024-11-20 14:04:58.601773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:90448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:01.525 [2024-11-20 14:04:58.601779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.525 [2024-11-20 14:04:58.601787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:90456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:01.525 [2024-11-20 14:04:58.601797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.525 [2024-11-20 14:04:58.601805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:90464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:01.525 [2024-11-20 14:04:58.601811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.525 [2024-11-20 14:04:58.601819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:90472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:01.525 [2024-11-20 14:04:58.601826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.525 [2024-11-20 14:04:58.601834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:90480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:01.525 [2024-11-20 14:04:58.601841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.525 [2024-11-20 14:04:58.601849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:90488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:01.525 [2024-11-20 14:04:58.601854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.525 [2024-11-20 14:04:58.601862] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:90496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:01.525 [2024-11-20 14:04:58.601873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.525 [2024-11-20 14:04:58.601881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:90504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:01.525 [2024-11-20 14:04:58.601888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.525 [2024-11-20 14:04:58.601896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:90512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:01.525 [2024-11-20 14:04:58.601903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.525 [2024-11-20 14:04:58.601911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:90520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:01.525 [2024-11-20 14:04:58.601921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.525 [2024-11-20 14:04:58.601930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:90528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:01.525 [2024-11-20 14:04:58.601938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.525 [2024-11-20 14:04:58.601947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:90536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:01.525 [2024-11-20 14:04:58.601954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.525 [2024-11-20 14:04:58.601966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:90544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:01.525 [2024-11-20 14:04:58.601973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.525 [2024-11-20 14:04:58.601981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:90552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:01.525 [2024-11-20 14:04:58.601987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.525 [2024-11-20 14:04:58.601995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:90880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.525 [2024-11-20 14:04:58.602002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.525 [2024-11-20 14:04:58.602013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:90888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.525 [2024-11-20 14:04:58.602020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.525 [2024-11-20 14:04:58.602029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:12 nsid:1 lba:90896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.525 [2024-11-20 14:04:58.602035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.525 [2024-11-20 14:04:58.602044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:90904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.525 [2024-11-20 14:04:58.602051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.525 [2024-11-20 14:04:58.602063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:90912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.525 [2024-11-20 14:04:58.602070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.525 [2024-11-20 14:04:58.602078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:90920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.525 [2024-11-20 14:04:58.602085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.525 [2024-11-20 14:04:58.602096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:90928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.526 [2024-11-20 14:04:58.602103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.526 [2024-11-20 14:04:58.602111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:90936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.526 [2024-11-20 14:04:58.602118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.526 [2024-11-20 14:04:58.602126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:90560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:01.526 [2024-11-20 14:04:58.602133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.526 [2024-11-20 14:04:58.602143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:90568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:01.526 [2024-11-20 14:04:58.602150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.526 [2024-11-20 14:04:58.602158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:90576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:01.526 [2024-11-20 14:04:58.602164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.526 [2024-11-20 14:04:58.602172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:90584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:01.526 [2024-11-20 14:04:58.602178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.526 [2024-11-20 14:04:58.602191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:90592 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:01.526 [2024-11-20 14:04:58.602199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.526 [2024-11-20 14:04:58.602208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:90600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:01.526 [2024-11-20 14:04:58.602214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.526 [2024-11-20 14:04:58.602222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:90608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:01.526 [2024-11-20 14:04:58.602228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.526 [2024-11-20 14:04:58.602249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:90616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:01.526 [2024-11-20 14:04:58.602258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.526 [2024-11-20 14:04:58.602283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:90944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.526 [2024-11-20 14:04:58.602290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.526 [2024-11-20 14:04:58.602303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:90952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.526 [2024-11-20 14:04:58.602310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.526 [2024-11-20 14:04:58.602318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:90960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.526 [2024-11-20 14:04:58.602328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.526 [2024-11-20 14:04:58.602336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:90968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.526 [2024-11-20 14:04:58.602343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.526 [2024-11-20 14:04:58.602352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:90976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.526 [2024-11-20 14:04:58.602358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.526 [2024-11-20 14:04:58.602366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:90984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.526 [2024-11-20 14:04:58.602373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.526 [2024-11-20 14:04:58.602386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:90992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.526 
[2024-11-20 14:04:58.602394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.526 [2024-11-20 14:04:58.602402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:91000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.526 [2024-11-20 14:04:58.602409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.526 [2024-11-20 14:04:58.602417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:91008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.526 [2024-11-20 14:04:58.602424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.526 [2024-11-20 14:04:58.602435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:91016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.526 [2024-11-20 14:04:58.602442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.526 [2024-11-20 14:04:58.602450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:91024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.526 [2024-11-20 14:04:58.602457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.526 [2024-11-20 14:04:58.602466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:91032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.526 [2024-11-20 14:04:58.602475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.526 [2024-11-20 14:04:58.602484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:91040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.526 [2024-11-20 14:04:58.602491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.526 [2024-11-20 14:04:58.602500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:91048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.526 [2024-11-20 14:04:58.602506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.526 [2024-11-20 14:04:58.602517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:91056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.526 [2024-11-20 14:04:58.602524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.526 [2024-11-20 14:04:58.602532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:91064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.526 [2024-11-20 14:04:58.602539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.526 [2024-11-20 14:04:58.602548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:91072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.526 [2024-11-20 14:04:58.602555] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.526 [2024-11-20 14:04:58.602564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:91080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.526 [2024-11-20 14:04:58.602574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.526 [2024-11-20 14:04:58.602583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:91088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.526 [2024-11-20 14:04:58.602589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.526 [2024-11-20 14:04:58.602599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:91096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.526 [2024-11-20 14:04:58.602605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.526 [2024-11-20 14:04:58.602614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:91104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.526 [2024-11-20 14:04:58.602620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.526 [2024-11-20 14:04:58.602628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:91112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.526 [2024-11-20 14:04:58.602638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.526 [2024-11-20 14:04:58.602649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:91120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.526 [2024-11-20 14:04:58.602655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.526 [2024-11-20 14:04:58.602662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:91128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.526 [2024-11-20 14:04:58.602669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.526 [2024-11-20 14:04:58.602678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:90624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:01.526 [2024-11-20 14:04:58.602684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.526 [2024-11-20 14:04:58.602692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:90632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:01.526 [2024-11-20 14:04:58.602698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.526 [2024-11-20 14:04:58.602716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:90640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:01.526 [2024-11-20 14:04:58.602724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.526 [2024-11-20 14:04:58.602732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:90648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:01.526 [2024-11-20 14:04:58.602738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.526 [2024-11-20 14:04:58.602746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:90656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:01.526 [2024-11-20 14:04:58.602753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.527 [2024-11-20 14:04:58.602763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:90664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:01.527 [2024-11-20 14:04:58.602770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.527 [2024-11-20 14:04:58.602778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:90672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:01.527 [2024-11-20 14:04:58.602784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.527 [2024-11-20 14:04:58.602795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:90680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:01.527 [2024-11-20 14:04:58.602802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.527 [2024-11-20 14:04:58.602809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:91136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.527 [2024-11-20 14:04:58.602827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.527 [2024-11-20 14:04:58.602835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:91144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.527 [2024-11-20 14:04:58.602841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.527 [2024-11-20 14:04:58.602853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:91152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.527 [2024-11-20 14:04:58.602860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.527 [2024-11-20 14:04:58.602868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:91160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.527 [2024-11-20 14:04:58.602875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.527 [2024-11-20 14:04:58.602883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:91168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.527 [2024-11-20 14:04:58.602889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:44:01.527 [2024-11-20 14:04:58.602900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:91176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.527 [2024-11-20 14:04:58.602909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.527 [2024-11-20 14:04:58.602917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:91184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.527 [2024-11-20 14:04:58.602923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.527 [2024-11-20 14:04:58.602931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:91192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.527 [2024-11-20 14:04:58.602941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.527 [2024-11-20 14:04:58.602949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:91200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.527 [2024-11-20 14:04:58.602955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.527 [2024-11-20 14:04:58.602963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:91208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.527 [2024-11-20 14:04:58.602970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.527 [2024-11-20 14:04:58.602978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:91216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.527 [2024-11-20 14:04:58.602985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.527 [2024-11-20 14:04:58.603001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:91224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.527 [2024-11-20 14:04:58.603009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.527 [2024-11-20 14:04:58.603017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:91232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.527 [2024-11-20 14:04:58.603024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.527 [2024-11-20 14:04:58.603032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:91240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.527 [2024-11-20 14:04:58.603042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.527 [2024-11-20 14:04:58.603050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:91248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.527 [2024-11-20 14:04:58.603057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.527 [2024-11-20 
14:04:58.603065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:91256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:01.527 [2024-11-20 14:04:58.603073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.527 [2024-11-20 14:04:58.603080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:90688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:01.527 [2024-11-20 14:04:58.603090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.527 [2024-11-20 14:04:58.603098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:90696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:01.527 [2024-11-20 14:04:58.603108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.527 [2024-11-20 14:04:58.603116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:90704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:01.527 [2024-11-20 14:04:58.603123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.527 [2024-11-20 14:04:58.603131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:90712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:01.527 [2024-11-20 14:04:58.603138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.527 [2024-11-20 14:04:58.603145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:90720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:01.527 [2024-11-20 14:04:58.603155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.527 [2024-11-20 14:04:58.603164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:90728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:01.527 [2024-11-20 14:04:58.603171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.527 [2024-11-20 14:04:58.603180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:90736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:01.527 [2024-11-20 14:04:58.603186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.527 [2024-11-20 14:04:58.603197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbdf60 is same with the state(6) to be set 00:44:01.527 [2024-11-20 14:04:58.603209] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:01.527 [2024-11-20 14:04:58.603214] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:01.527 [2024-11-20 14:04:58.603220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90744 len:8 PRP1 0x0 PRP2 0x0 00:44:01.527 [2024-11-20 14:04:58.603226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.527 
[2024-11-20 14:04:58.603239] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:01.527 [2024-11-20 14:04:58.603244] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:01.527 [2024-11-20 14:04:58.603249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91264 len:8 PRP1 0x0 PRP2 0x0 00:44:01.527 [2024-11-20 14:04:58.603256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.527 [2024-11-20 14:04:58.603268] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:01.527 [2024-11-20 14:04:58.603272] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:01.527 [2024-11-20 14:04:58.603281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91272 len:8 PRP1 0x0 PRP2 0x0 00:44:01.527 [2024-11-20 14:04:58.603288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.527 [2024-11-20 14:04:58.603294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:01.527 [2024-11-20 14:04:58.603299] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:01.527 [2024-11-20 14:04:58.603305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91280 len:8 PRP1 0x0 PRP2 0x0 00:44:01.527 [2024-11-20 14:04:58.603314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.527 [2024-11-20 14:04:58.603320] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:01.527 [2024-11-20 14:04:58.603325] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:01.527 [2024-11-20 14:04:58.603331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91288 len:8 PRP1 0x0 PRP2 0x0 00:44:01.527 [2024-11-20 14:04:58.603337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.527 [2024-11-20 14:04:58.603344] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:01.527 [2024-11-20 14:04:58.603349] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:01.527 [2024-11-20 14:04:58.603353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91296 len:8 PRP1 0x0 PRP2 0x0 00:44:01.527 [2024-11-20 14:04:58.603360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.527 [2024-11-20 14:04:58.603369] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:01.527 [2024-11-20 14:04:58.603374] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:01.527 [2024-11-20 14:04:58.603379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91304 len:8 PRP1 0x0 PRP2 0x0 00:44:01.527 [2024-11-20 14:04:58.603384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.527 [2024-11-20 14:04:58.603391] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:01.527 [2024-11-20 14:04:58.603395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:01.528 [2024-11-20 14:04:58.603401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91312 len:8 PRP1 0x0 PRP2 0x0 00:44:01.528 [2024-11-20 14:04:58.603410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.528 [2024-11-20 14:04:58.603419] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:01.528 [2024-11-20 14:04:58.603423] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:01.528 [2024-11-20 14:04:58.603429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91320 len:8 PRP1 0x0 PRP2 0x0 00:44:01.528 [2024-11-20 14:04:58.603434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.528 [2024-11-20 14:04:58.603440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:01.528 [2024-11-20 14:04:58.603445] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:01.528 [2024-11-20 14:04:58.603450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91328 len:8 PRP1 0x0 PRP2 0x0 00:44:01.528 [2024-11-20 14:04:58.603456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.528 [2024-11-20 14:04:58.603464] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:01.528 [2024-11-20 14:04:58.603472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:01.528 [2024-11-20 14:04:58.603477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91336 len:8 PRP1 0x0 PRP2 0x0 00:44:01.528 [2024-11-20 14:04:58.603483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.528 [2024-11-20 14:04:58.603489] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:01.528 [2024-11-20 14:04:58.603495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:01.528 [2024-11-20 14:04:58.603500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91344 len:8 PRP1 0x0 PRP2 0x0 00:44:01.528 [2024-11-20 14:04:58.603506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.528 [2024-11-20 14:04:58.603512] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:01.528 [2024-11-20 14:04:58.603517] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:01.528 [2024-11-20 14:04:58.603523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91352 len:8 PRP1 0x0 PRP2 0x0 00:44:01.528 [2024-11-20 14:04:58.603528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.528 [2024-11-20 14:04:58.603539] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:44:01.528 [2024-11-20 14:04:58.603544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:01.528 [2024-11-20 14:04:58.603549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91360 len:8 PRP1 0x0 PRP2 0x0 00:44:01.528 [2024-11-20 14:04:58.603555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.528 [2024-11-20 14:04:58.603562] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:01.528 [2024-11-20 14:04:58.603567] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:01.528 [2024-11-20 14:04:58.603572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91368 len:8 PRP1 0x0 PRP2 0x0 00:44:01.528 [2024-11-20 14:04:58.603578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.528 [2024-11-20 14:04:58.603584] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:01.528 [2024-11-20 14:04:58.603588] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:01.528 [2024-11-20 14:04:58.603594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91376 len:8 PRP1 0x0 PRP2 0x0 00:44:01.528 [2024-11-20 14:04:58.603600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.528 [2024-11-20 14:04:58.603609] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:01.528 [2024-11-20 14:04:58.603615] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:01.528 [2024-11-20 14:04:58.603620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91384 len:8 PRP1 0x0 PRP2 0x0 00:44:01.528 [2024-11-20 14:04:58.603625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.528 [2024-11-20 14:04:58.603632] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:01.528 [2024-11-20 14:04:58.603637] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:01.528 [2024-11-20 14:04:58.603643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91392 len:8 PRP1 0x0 PRP2 0x0 00:44:01.528 [2024-11-20 14:04:58.603649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.528 [2024-11-20 14:04:58.603660] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:01.528 [2024-11-20 14:04:58.603665] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:01.528 [2024-11-20 14:04:58.603670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91400 len:8 PRP1 0x0 PRP2 0x0 00:44:01.528 [2024-11-20 14:04:58.603676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.528 [2024-11-20 14:04:58.603686] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:01.528 [2024-11-20 14:04:58.603691] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:01.528 [2024-11-20 14:04:58.603696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91408 len:8 PRP1 0x0 PRP2 0x0 00:44:01.528 [2024-11-20 14:04:58.603702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.528 [2024-11-20 14:04:58.603719] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:01.528 [2024-11-20 14:04:58.603724] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:01.528 [2024-11-20 14:04:58.603729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91416 len:8 PRP1 0x0 PRP2 0x0 00:44:01.528 [2024-11-20 14:04:58.603736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:01.528 [2024-11-20 14:04:58.604054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:44:01.528 [2024-11-20 14:04:58.604147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d50e50 (9): Bad file descriptor 00:44:01.528 [2024-11-20 14:04:58.604252] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.528 [2024-11-20 14:04:58.604273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d50e50 with addr=10.0.0.3, port=4420 00:44:01.528 [2024-11-20 14:04:58.604281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d50e50 is same with the state(6) to be set 00:44:01.528 [2024-11-20 14:04:58.604294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d50e50 (9): Bad file descriptor 00:44:01.528 [2024-11-20 14:04:58.604319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:44:01.528 [2024-11-20 14:04:58.604327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:44:01.528 [2024-11-20 14:04:58.604338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:44:01.528 [2024-11-20 14:04:58.604347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
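The repeated connect() failures above (errno 111, connection refused) are the expected behaviour while the 4420 listener is removed: outstanding I/O on the old queue pair is aborted with SQ DELETION, and the host then retries the connection once per second per --reconnect-delay-sec. A small sketch of how the controller could be watched from the shell during such a window; bdev_nvme_get_controllers is the stock RPC for dumping controller state, and the loop bounds are illustrative only.

# Poll the controller while the listener is down (illustrative; same RPC socket as above).
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for _ in 1 2 3 4 5; do
  "$rpc_py" -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n NVMe0
  sleep 1
done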
00:44:01.528 [2024-11-20 14:04:58.604355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:44:01.528 14:04:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:44:02.466 5650.00 IOPS, 22.07 MiB/s [2024-11-20T14:04:59.789Z] [2024-11-20 14:04:59.602605] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.466 [2024-11-20 14:04:59.602690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d50e50 with addr=10.0.0.3, port=4420 00:44:02.466 [2024-11-20 14:04:59.602718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d50e50 is same with the state(6) to be set 00:44:02.466 [2024-11-20 14:04:59.602747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d50e50 (9): Bad file descriptor 00:44:02.466 [2024-11-20 14:04:59.602774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:44:02.466 [2024-11-20 14:04:59.602782] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:44:02.466 [2024-11-20 14:04:59.602791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:44:02.466 [2024-11-20 14:04:59.602806] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:44:02.466 [2024-11-20 14:04:59.602816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:44:02.466 14:04:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:44:02.726 [2024-11-20 14:04:59.855318] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:44:02.726 14:04:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 82301 00:44:03.560 3766.67 IOPS, 14.71 MiB/s [2024-11-20T14:05:00.883Z] [2024-11-20 14:05:00.618157] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
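The listener is re-added at 14:04:59.855 and the reset completes ("Resetting controller successful") at 14:05:00.618, roughly 2 s after the first connect() failure at 14:04:58.604 and after two reconnect attempts spaced 1 s apart. That keeps the outage well inside the 5 s --ctrlr-loss-timeout-sec window, which is why the controller recovers instead of being deleted. A quick arithmetic check of that window, as a sketch using the timestamps from the log above:

# Outage length vs. the configured loss timeout (timestamps copied from the log above).
# first connect() failure: 14:04:58.604   reconnect success: 14:05:00.618
outage=$(echo "scale=3; (5*60 + 0.618) - (4*60 + 58.604)" | bc)
echo "outage ~${outage}s against a 5 s ctrlr-loss-timeout and a 1 s reconnect delay"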
00:44:05.444 2825.00 IOPS, 11.04 MiB/s [2024-11-20T14:05:03.703Z] 4189.00 IOPS, 16.36 MiB/s [2024-11-20T14:05:04.639Z] 5368.17 IOPS, 20.97 MiB/s [2024-11-20T14:05:05.574Z] 6212.71 IOPS, 24.27 MiB/s [2024-11-20T14:05:06.510Z] 6848.12 IOPS, 26.75 MiB/s [2024-11-20T14:05:07.890Z] 7321.00 IOPS, 28.60 MiB/s [2024-11-20T14:05:07.890Z] 7702.50 IOPS, 30.09 MiB/s
00:44:10.567 Latency(us)
00:44:10.567 [2024-11-20T14:05:07.890Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:44:10.567 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:44:10.567 Verification LBA range: start 0x0 length 0x4000
00:44:10.567 NVMe0n1 : 10.01 7708.23 30.11 0.00 0.00 16577.21 1073.19 3018433.62
00:44:10.567 [2024-11-20T14:05:07.890Z] ===================================================================================================================
00:44:10.567 [2024-11-20T14:05:07.890Z] Total : 7708.23 30.11 0.00 0.00 16577.21 1073.19 3018433.62
00:44:10.567 {
00:44:10.567 "results": [
00:44:10.567 {
00:44:10.567 "job": "NVMe0n1",
00:44:10.567 "core_mask": "0x4",
00:44:10.567 "workload": "verify",
00:44:10.567 "status": "finished",
00:44:10.567 "verify_range": {
00:44:10.567 "start": 0,
00:44:10.567 "length": 16384
00:44:10.567 },
00:44:10.567 "queue_depth": 128,
00:44:10.567 "io_size": 4096,
00:44:10.567 "runtime": 10.007101,
00:44:10.567 "iops": 7708.226388441568,
00:44:10.567 "mibps": 30.110259329849875,
00:44:10.567 "io_failed": 0,
00:44:10.567 "io_timeout": 0,
00:44:10.567 "avg_latency_us": 16577.20941576585,
00:44:10.567 "min_latency_us": 1073.1877729257642,
00:44:10.567 "max_latency_us": 3018433.6209606985
00:44:10.567 }
00:44:10.567 ],
00:44:10.567 "core_count": 1
00:44:10.567 }
00:44:10.567 14:05:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82411
00:44:10.567 14:05:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:44:10.568 14:05:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:44:10.568 Running I/O for 10 seconds...
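The JSON block a few lines up is the perform_tests result for the first 10-second run: about 7708 IOPS with zero failed or timed-out I/O despite the mid-run listener drop, and "mibps" is simply iops scaled by the 4096-byte I/O size. A sketch of pulling the headline numbers back out of a saved copy of that JSON; the result.json filename and the use of jq are assumptions for illustration, not part of the test.

# Extract the headline numbers from a saved perform_tests result (illustrative).
jq -r '.results[0] | "\(.job): \(.iops) IOPS, io_failed=\(.io_failed), avg=\(.avg_latency_us) us"' result.json
# mibps follows from iops and io_size: 7708.226388441568 * 4096 / 1048576 = 30.11 MiB/s.
awk 'BEGIN { printf "%.2f MiB/s\n", 7708.226388441568 * 4096 / 1048576 }'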
00:44:11.507 14:05:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:44:11.507 10464.00 IOPS, 40.88 MiB/s [2024-11-20T14:05:08.830Z] [2024-11-20 14:05:08.699975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:94144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.507 [2024-11-20 14:05:08.700043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.507 [2024-11-20 14:05:08.700061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:94152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.507 [2024-11-20 14:05:08.700085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.507 [2024-11-20 14:05:08.700094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:94160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.507 [2024-11-20 14:05:08.700100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.507 [2024-11-20 14:05:08.700107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.507 [2024-11-20 14:05:08.700113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.507 [2024-11-20 14:05:08.700120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:94176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.507 [2024-11-20 14:05:08.700127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.507 [2024-11-20 14:05:08.700134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:94184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.507 [2024-11-20 14:05:08.700139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.507 [2024-11-20 14:05:08.700146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:94192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.507 [2024-11-20 14:05:08.700153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.507 [2024-11-20 14:05:08.700164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:94200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.507 [2024-11-20 14:05:08.700170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.507 [2024-11-20 14:05:08.700177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:93696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.507 [2024-11-20 14:05:08.700182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.507 [2024-11-20 14:05:08.700189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:93704 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.507 [2024-11-20 14:05:08.700195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.507 [2024-11-20 14:05:08.700202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:93712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.507 [2024-11-20 14:05:08.700208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.507 [2024-11-20 14:05:08.700214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:93720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.507 [2024-11-20 14:05:08.700220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.507 [2024-11-20 14:05:08.700227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:93728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.507 [2024-11-20 14:05:08.700232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.507 [2024-11-20 14:05:08.700239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:93736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.507 [2024-11-20 14:05:08.700244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.507 [2024-11-20 14:05:08.700251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:93744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.507 [2024-11-20 14:05:08.700256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.507 [2024-11-20 14:05:08.700264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:93752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.507 [2024-11-20 14:05:08.700269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.507 [2024-11-20 14:05:08.700275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:94208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.507 [2024-11-20 14:05:08.700281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.507 [2024-11-20 14:05:08.700291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:94216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.507 [2024-11-20 14:05:08.700297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.507 [2024-11-20 14:05:08.700303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:94224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.507 [2024-11-20 14:05:08.700308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.507 [2024-11-20 14:05:08.700315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:94232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:44:11.507 [2024-11-20 14:05:08.700321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.507 [2024-11-20 14:05:08.700328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:94240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.507 [2024-11-20 14:05:08.700333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.507 [2024-11-20 14:05:08.700340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:94248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.507 [2024-11-20 14:05:08.700345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.507 [2024-11-20 14:05:08.700352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:94256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.507 [2024-11-20 14:05:08.700358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.507 [2024-11-20 14:05:08.700368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:94264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.507 [2024-11-20 14:05:08.700374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.507 [2024-11-20 14:05:08.700381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.507 [2024-11-20 14:05:08.700386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.507 [2024-11-20 14:05:08.700393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.507 [2024-11-20 14:05:08.700398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.508 [2024-11-20 14:05:08.700405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:94288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.508 [2024-11-20 14:05:08.700411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.508 [2024-11-20 14:05:08.700418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:94296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.508 [2024-11-20 14:05:08.700423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.508 [2024-11-20 14:05:08.700429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.508 [2024-11-20 14:05:08.700435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.508 [2024-11-20 14:05:08.700442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:94312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.508 [2024-11-20 14:05:08.700448] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.508 [2024-11-20 14:05:08.700455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:94320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.508 [2024-11-20 14:05:08.700461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.508 [2024-11-20 14:05:08.700468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:94328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.508 [2024-11-20 14:05:08.700474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.508 [2024-11-20 14:05:08.700481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:93760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.508 [2024-11-20 14:05:08.700488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.508 [2024-11-20 14:05:08.700497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:93768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.508 [2024-11-20 14:05:08.700503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.508 [2024-11-20 14:05:08.700529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:93776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.508 [2024-11-20 14:05:08.700535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.508 [2024-11-20 14:05:08.700543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:93784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.508 [2024-11-20 14:05:08.700549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.508 [2024-11-20 14:05:08.700556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:93792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.508 [2024-11-20 14:05:08.700588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.508 [2024-11-20 14:05:08.700597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:93800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.508 [2024-11-20 14:05:08.700603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.508 [2024-11-20 14:05:08.700610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:93808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.508 [2024-11-20 14:05:08.700616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.508 [2024-11-20 14:05:08.700623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:93816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.508 [2024-11-20 14:05:08.700628] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.508 [2024-11-20 14:05:08.700635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:94336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.508 [2024-11-20 14:05:08.700644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.508 [2024-11-20 14:05:08.700651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:94344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.508 [2024-11-20 14:05:08.700657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.508 [2024-11-20 14:05:08.700663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:94352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.508 [2024-11-20 14:05:08.700669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.508 [2024-11-20 14:05:08.700675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:94360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.508 [2024-11-20 14:05:08.700681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.508 [2024-11-20 14:05:08.700691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:94368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.508 [2024-11-20 14:05:08.700698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.508 [2024-11-20 14:05:08.700715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:94376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.508 [2024-11-20 14:05:08.700722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.508 [2024-11-20 14:05:08.700730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:94384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.508 [2024-11-20 14:05:08.700736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.508 [2024-11-20 14:05:08.700756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:94392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.508 [2024-11-20 14:05:08.700762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.508 [2024-11-20 14:05:08.700770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.508 [2024-11-20 14:05:08.700775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.508 [2024-11-20 14:05:08.700784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:94408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.508 [2024-11-20 14:05:08.700790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.508 [2024-11-20 14:05:08.700798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:94416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.508 [2024-11-20 14:05:08.700805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.508 [2024-11-20 14:05:08.700812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:94424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.508 [2024-11-20 14:05:08.700818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.508 [2024-11-20 14:05:08.700825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:94432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.508 [2024-11-20 14:05:08.700830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.508 [2024-11-20 14:05:08.700840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:94440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.508 [2024-11-20 14:05:08.700846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.508 [2024-11-20 14:05:08.700854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:94448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.508 [2024-11-20 14:05:08.700859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.508 [2024-11-20 14:05:08.700866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:94456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.508 [2024-11-20 14:05:08.700871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.508 [2024-11-20 14:05:08.700878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:94464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.508 [2024-11-20 14:05:08.700883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.508 [2024-11-20 14:05:08.700894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:94472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.508 [2024-11-20 14:05:08.700899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.508 [2024-11-20 14:05:08.700906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:94480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.508 [2024-11-20 14:05:08.700912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.508 [2024-11-20 14:05:08.700920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:94488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.508 [2024-11-20 14:05:08.700925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:44:11.508 [2024-11-20 14:05:08.700935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:93824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.508 [2024-11-20 14:05:08.700941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.508 [2024-11-20 14:05:08.700949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:93832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.508 [2024-11-20 14:05:08.700955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.508 [2024-11-20 14:05:08.700962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:93840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.508 [2024-11-20 14:05:08.700967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.508 [2024-11-20 14:05:08.700974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:93848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.508 [2024-11-20 14:05:08.700980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.508 [2024-11-20 14:05:08.701003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:93856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.509 [2024-11-20 14:05:08.701010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.509 [2024-11-20 14:05:08.701034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:93864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.509 [2024-11-20 14:05:08.701042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.509 [2024-11-20 14:05:08.701063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:93872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.509 [2024-11-20 14:05:08.701069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.509 [2024-11-20 14:05:08.701088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:93880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.509 [2024-11-20 14:05:08.701108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.509 [2024-11-20 14:05:08.701117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:93888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.509 [2024-11-20 14:05:08.701123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.509 [2024-11-20 14:05:08.701132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:93896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.509 [2024-11-20 14:05:08.701139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.509 [2024-11-20 
14:05:08.701146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:93904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.509 [2024-11-20 14:05:08.701152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.509 [2024-11-20 14:05:08.701160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:93912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.509 [2024-11-20 14:05:08.701169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.509 [2024-11-20 14:05:08.701176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:93920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.509 [2024-11-20 14:05:08.701181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.509 [2024-11-20 14:05:08.701188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:93928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.509 [2024-11-20 14:05:08.701194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.509 [2024-11-20 14:05:08.701203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:93936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.509 [2024-11-20 14:05:08.701209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.509 [2024-11-20 14:05:08.701217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:93944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.509 [2024-11-20 14:05:08.701223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.509 [2024-11-20 14:05:08.701229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:94496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.509 [2024-11-20 14:05:08.701235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.509 [2024-11-20 14:05:08.701244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:94504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.509 [2024-11-20 14:05:08.701249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.509 [2024-11-20 14:05:08.701256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:94512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.509 [2024-11-20 14:05:08.701262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.509 [2024-11-20 14:05:08.701269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:94520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.509 [2024-11-20 14:05:08.701277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.509 [2024-11-20 14:05:08.701284] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:94528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.509 [2024-11-20 14:05:08.701289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.509 [2024-11-20 14:05:08.701297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.509 [2024-11-20 14:05:08.701302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.509 [2024-11-20 14:05:08.701309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:94544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.509 [2024-11-20 14:05:08.701317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.509 [2024-11-20 14:05:08.701324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:94552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.509 [2024-11-20 14:05:08.701330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.509 [2024-11-20 14:05:08.701336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:94560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.509 [2024-11-20 14:05:08.701342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.509 [2024-11-20 14:05:08.701348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:94568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.509 [2024-11-20 14:05:08.701354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.509 [2024-11-20 14:05:08.701363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:94576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.509 [2024-11-20 14:05:08.701369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.509 [2024-11-20 14:05:08.701376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:94584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.509 [2024-11-20 14:05:08.701381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.509 [2024-11-20 14:05:08.701388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:94592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.509 [2024-11-20 14:05:08.701396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.509 [2024-11-20 14:05:08.701403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:94600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.509 [2024-11-20 14:05:08.701410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.509 [2024-11-20 14:05:08.701418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:87 nsid:1 lba:93952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.509 [2024-11-20 14:05:08.701423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.509 [2024-11-20 14:05:08.701430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:93960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.509 [2024-11-20 14:05:08.701436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.509 [2024-11-20 14:05:08.701445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:93968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.509 [2024-11-20 14:05:08.701451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.509 [2024-11-20 14:05:08.701458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:93976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.509 [2024-11-20 14:05:08.701463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.509 [2024-11-20 14:05:08.701470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:93984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.509 [2024-11-20 14:05:08.701476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.509 [2024-11-20 14:05:08.701483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:93992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.509 [2024-11-20 14:05:08.701488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.509 [2024-11-20 14:05:08.701498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:94000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.509 [2024-11-20 14:05:08.701503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.509 [2024-11-20 14:05:08.701511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:94008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.509 [2024-11-20 14:05:08.701516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.509 [2024-11-20 14:05:08.701523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:94608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.509 [2024-11-20 14:05:08.701529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.509 [2024-11-20 14:05:08.701536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.509 [2024-11-20 14:05:08.701541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.509 [2024-11-20 14:05:08.701551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:94624 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:44:11.509 [2024-11-20 14:05:08.701557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.509 [2024-11-20 14:05:08.701563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:94632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.509 [2024-11-20 14:05:08.701568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.509 [2024-11-20 14:05:08.701575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.509 [2024-11-20 14:05:08.701584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.509 [2024-11-20 14:05:08.701593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:94648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.509 [2024-11-20 14:05:08.701599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.510 [2024-11-20 14:05:08.701606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:94656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.510 [2024-11-20 14:05:08.701611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.510 [2024-11-20 14:05:08.701618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:94664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.510 [2024-11-20 14:05:08.701626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.510 [2024-11-20 14:05:08.701634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:94672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.510 [2024-11-20 14:05:08.701639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.510 [2024-11-20 14:05:08.701646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:94680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.510 [2024-11-20 14:05:08.701652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.510 [2024-11-20 14:05:08.701661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:94688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.510 [2024-11-20 14:05:08.701667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.510 [2024-11-20 14:05:08.701674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:94696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.510 [2024-11-20 14:05:08.701680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.510 [2024-11-20 14:05:08.701687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:94704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.510 [2024-11-20 
14:05:08.701692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.510 [2024-11-20 14:05:08.701702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:94712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:44:11.510 [2024-11-20 14:05:08.701716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.510 [2024-11-20 14:05:08.701724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:94016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.510 [2024-11-20 14:05:08.701729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.510 [2024-11-20 14:05:08.701737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:94024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.510 [2024-11-20 14:05:08.701745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.510 [2024-11-20 14:05:08.701752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:94032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.510 [2024-11-20 14:05:08.701758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.510 [2024-11-20 14:05:08.701766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:94040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.510 [2024-11-20 14:05:08.701771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.510 [2024-11-20 14:05:08.701780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:94048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.510 [2024-11-20 14:05:08.701786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.510 [2024-11-20 14:05:08.701803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:94056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.510 [2024-11-20 14:05:08.701810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.510 [2024-11-20 14:05:08.701817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:94064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.510 [2024-11-20 14:05:08.701826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.510 [2024-11-20 14:05:08.701834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:94072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.510 [2024-11-20 14:05:08.701839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.510 [2024-11-20 14:05:08.701848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:94080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.510 [2024-11-20 14:05:08.701853] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.510 [2024-11-20 14:05:08.701861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:94088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.510 [2024-11-20 14:05:08.701867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.510 [2024-11-20 14:05:08.701874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:94096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.510 [2024-11-20 14:05:08.701882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.510 [2024-11-20 14:05:08.701889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:94104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.510 [2024-11-20 14:05:08.701896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.510 [2024-11-20 14:05:08.701903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:94112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.510 [2024-11-20 14:05:08.701909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.510 [2024-11-20 14:05:08.701916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:94120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.510 [2024-11-20 14:05:08.701924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.510 [2024-11-20 14:05:08.701931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:94128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:44:11.510 [2024-11-20 14:05:08.701938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.510 [2024-11-20 14:05:08.701945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbf0e0 is same with the state(6) to be set 00:44:11.510 [2024-11-20 14:05:08.701954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:44:11.510 [2024-11-20 14:05:08.701959] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:11.510 [2024-11-20 14:05:08.701967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94136 len:8 PRP1 0x0 PRP2 0x0 00:44:11.510 [2024-11-20 14:05:08.701973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.510 [2024-11-20 14:05:08.702109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:44:11.510 [2024-11-20 14:05:08.702128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.510 [2024-11-20 14:05:08.702136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:44:11.510 [2024-11-20 14:05:08.702142] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.510 [2024-11-20 14:05:08.702149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:44:11.510 [2024-11-20 14:05:08.702155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.510 [2024-11-20 14:05:08.702162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:44:11.510 [2024-11-20 14:05:08.702172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:11.510 [2024-11-20 14:05:08.702178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d50e50 is same with the state(6) to be set 00:44:11.510 [2024-11-20 14:05:08.702360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:44:11.510 [2024-11-20 14:05:08.702382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d50e50 (9): Bad file descriptor 00:44:11.510 [2024-11-20 14:05:08.702477] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:44:11.510 [2024-11-20 14:05:08.702494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d50e50 with addr=10.0.0.3, port=4420 00:44:11.510 [2024-11-20 14:05:08.702502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d50e50 is same with the state(6) to be set 00:44:11.510 [2024-11-20 14:05:08.702514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d50e50 (9): Bad file descriptor 00:44:11.510 [2024-11-20 14:05:08.702525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:44:11.510 [2024-11-20 14:05:08.702531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:44:11.510 [2024-11-20 14:05:08.702540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:44:11.510 [2024-11-20 14:05:08.702548] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
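The wall of notices above is the expected fallout of the listener removal at host/timeout.sh@99: with no TCP listener left on 10.0.0.3:4420, every command still queued on I/O qpair 1 is completed back with ABORTED - SQ DELETION, and the host then sits in its reconnect loop, where each uring_sock_create() attempt fails with errno = 111 (ECONNREFUSED on Linux) until a listener reappears. When reading a capture like this, a rough way to size the outage is to count those two message types; the sketch below assumes the console output was saved to a file (the file name here is made up):

  # Each failed-back command is printed as a READ/WRITE line plus an
  # "ABORTED - SQ DELETION" completion, so counting completions counts commands.
  grep -c 'ABORTED - SQ DELETION' nvmf_timeout_console.log
  # Reconnect attempts during the outage all fail with ECONNREFUSED (errno 111).
  grep -c 'connect() failed, errno = 111' nvmf_timeout_console.log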
00:44:11.510 [2024-11-20 14:05:08.702556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:44:11.510 14:05:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:44:12.460 5856.00 IOPS, 22.88 MiB/s [2024-11-20T14:05:09.783Z] [2024-11-20 14:05:09.700796] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:44:12.460 [2024-11-20 14:05:09.700862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d50e50 with addr=10.0.0.3, port=4420 00:44:12.460 [2024-11-20 14:05:09.700873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d50e50 is same with the state(6) to be set 00:44:12.460 [2024-11-20 14:05:09.700908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d50e50 (9): Bad file descriptor 00:44:12.460 [2024-11-20 14:05:09.700921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:44:12.460 [2024-11-20 14:05:09.700927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:44:12.461 [2024-11-20 14:05:09.700936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:44:12.461 [2024-11-20 14:05:09.700957] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:44:12.461 [2024-11-20 14:05:09.700964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:44:13.398 3904.00 IOPS, 15.25 MiB/s [2024-11-20T14:05:10.721Z] [2024-11-20 14:05:10.699215] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:44:13.398 [2024-11-20 14:05:10.699276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d50e50 with addr=10.0.0.3, port=4420 00:44:13.398 [2024-11-20 14:05:10.699291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d50e50 is same with the state(6) to be set 00:44:13.398 [2024-11-20 14:05:10.699320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d50e50 (9): Bad file descriptor 00:44:13.398 [2024-11-20 14:05:10.699341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:44:13.398 [2024-11-20 14:05:10.699353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:44:13.398 [2024-11-20 14:05:10.699368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:44:13.398 [2024-11-20 14:05:10.699386] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:44:13.398 [2024-11-20 14:05:10.699401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:44:14.595 2928.00 IOPS, 11.44 MiB/s [2024-11-20T14:05:11.918Z] [2024-11-20 14:05:11.700256] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:44:14.595 [2024-11-20 14:05:11.700367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d50e50 with addr=10.0.0.3, port=4420 00:44:14.595 [2024-11-20 14:05:11.700380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d50e50 is same with the state(6) to be set 00:44:14.595 [2024-11-20 14:05:11.700577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d50e50 (9): Bad file descriptor 00:44:14.595 [2024-11-20 14:05:11.700775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:44:14.595 [2024-11-20 14:05:11.700790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:44:14.595 [2024-11-20 14:05:11.700828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:44:14.595 [2024-11-20 14:05:11.700839] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:44:14.595 [2024-11-20 14:05:11.700850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:44:14.595 14:05:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:44:14.853 [2024-11-20 14:05:11.976459] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:44:14.853 14:05:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 82411 00:44:15.420 2342.40 IOPS, 9.15 MiB/s [2024-11-20T14:05:12.743Z] [2024-11-20 14:05:12.722778] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 
00:44:17.290 3177.83 IOPS, 12.41 MiB/s [2024-11-20T14:05:15.989Z] 4195.29 IOPS, 16.39 MiB/s [2024-11-20T14:05:16.925Z] 4944.38 IOPS, 19.31 MiB/s [2024-11-20T14:05:17.861Z] 5538.56 IOPS, 21.63 MiB/s [2024-11-20T14:05:17.861Z] 6010.50 IOPS, 23.48 MiB/s 00:44:20.538 Latency(us) 00:44:20.538 [2024-11-20T14:05:17.861Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:20.538 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:44:20.538 Verification LBA range: start 0x0 length 0x4000 00:44:20.538 NVMe0n1 : 10.01 6016.74 23.50 4492.25 0.00 12160.13 618.87 3018433.62 00:44:20.538 [2024-11-20T14:05:17.861Z] =================================================================================================================== 00:44:20.538 [2024-11-20T14:05:17.861Z] Total : 6016.74 23.50 4492.25 0.00 12160.13 0.00 3018433.62 00:44:20.538 { 00:44:20.538 "results": [ 00:44:20.538 { 00:44:20.538 "job": "NVMe0n1", 00:44:20.538 "core_mask": "0x4", 00:44:20.538 "workload": "verify", 00:44:20.538 "status": "finished", 00:44:20.538 "verify_range": { 00:44:20.538 "start": 0, 00:44:20.538 "length": 16384 00:44:20.538 }, 00:44:20.538 "queue_depth": 128, 00:44:20.538 "io_size": 4096, 00:44:20.538 "runtime": 10.007904, 00:44:20.538 "iops": 6016.744365253703, 00:44:20.538 "mibps": 23.502907676772278, 00:44:20.538 "io_failed": 44958, 00:44:20.538 "io_timeout": 0, 00:44:20.538 "avg_latency_us": 12160.131628715542, 00:44:20.538 "min_latency_us": 618.871615720524, 00:44:20.538 "max_latency_us": 3018433.6209606985 00:44:20.538 } 00:44:20.538 ], 00:44:20.538 "core_count": 1 00:44:20.538 } 00:44:20.538 14:05:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82277 00:44:20.538 14:05:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82277 ']' 00:44:20.538 14:05:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82277 00:44:20.538 14:05:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:44:20.538 14:05:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:20.538 14:05:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82277 00:44:20.538 14:05:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:44:20.538 14:05:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:44:20.538 killing process with pid 82277 00:44:20.538 14:05:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82277' 00:44:20.538 14:05:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82277 00:44:20.538 Received shutdown signal, test time was about 10.000000 seconds 00:44:20.538 00:44:20.538 Latency(us) 00:44:20.538 [2024-11-20T14:05:17.861Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:20.538 [2024-11-20T14:05:17.861Z] =================================================================================================================== 00:44:20.538 [2024-11-20T14:05:17.861Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:44:20.538 14:05:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82277 00:44:20.797 14:05:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
randread -t 10 -f 00:44:20.797 14:05:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82524 00:44:20.797 14:05:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82524 /var/tmp/bdevperf.sock 00:44:20.797 14:05:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82524 ']' 00:44:20.797 14:05:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:44:20.797 14:05:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:20.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:44:20.797 14:05:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:44:20.797 14:05:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:20.797 14:05:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:44:20.797 [2024-11-20 14:05:17.977194] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:44:20.797 [2024-11-20 14:05:17.977274] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82524 ] 00:44:21.056 [2024-11-20 14:05:18.126238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:21.056 [2024-11-20 14:05:18.206000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:44:21.056 [2024-11-20 14:05:18.281499] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:44:21.623 14:05:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:21.623 14:05:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:44:21.623 14:05:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82536 00:44:21.623 14:05:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82524 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:44:21.623 14:05:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:44:21.881 14:05:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:44:22.139 NVMe0n1 00:44:22.397 14:05:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82577 00:44:22.397 14:05:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:44:22.397 14:05:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:44:22.397 Running I/O for 10 seconds... 
00:44:23.331 14:05:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:44:23.593 17276.00 IOPS, 67.48 MiB/s [2024-11-20T14:05:20.916Z] [2024-11-20 14:05:20.666850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x968ac0 is same with the state(6) to be set
[... further identical nvmf_tcp_qpair_set_recv_state messages for tqpair=0x968ac0 omitted ...]
00:44:23.593 [2024-11-20 14:05:20.667315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x968ac0 is same with the state(6) to be set
00:44:23.593 [2024-11-20 14:05:20.667382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:93296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:44:23.593 [2024-11-20 14:05:20.667468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... matching nvme_io_qpair_print_command READ / ABORTED - SQ DELETION completion pairs for the remaining queued I/O omitted ...]
00:44:23.597 [2024-11-20 14:05:20.669234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118fe20 is same with the state(6) to be set
00:44:23.597 [2024-11-20 14:05:20.669245] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:44:23.597 [2024-11-20 14:05:20.669250] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:44:23.597 [2024-11-20 14:05:20.669257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5072 len:8 PRP1 0x0 PRP2 0x0
00:44:23.597 [2024-11-20 14:05:20.669263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:44:23.597 [2024-11-20 14:05:20.669571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:44:23.597 [2024-11-20 14:05:20.669650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1122e50 (9): Bad file descriptor
00:44:23.597 [2024-11-20 14:05:20.669761] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:44:23.597 [2024-11-20 14:05:20.669779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1122e50 with addr=10.0.0.3, port=4420
00:44:23.597 [2024-11-20 14:05:20.669790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1122e50 is same with the state(6) to be set
00:44:23.597 [2024-11-20 14:05:20.669803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1122e50 (9): Bad file descriptor
00:44:23.597 [2024-11-20 14:05:20.669814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state
00:44:23.597 [2024-11-20 14:05:20.669821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed
00:44:23.597 [2024-11-20 14:05:20.669830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*:
[nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:44:23.597 [2024-11-20 14:05:20.669838] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:44:23.597 [2024-11-20 14:05:20.669844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:44:23.597 14:05:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 82577 00:44:25.469 9498.50 IOPS, 37.10 MiB/s [2024-11-20T14:05:22.792Z] 6332.33 IOPS, 24.74 MiB/s [2024-11-20T14:05:22.792Z] [2024-11-20 14:05:22.666275] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:44:25.469 [2024-11-20 14:05:22.666350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1122e50 with addr=10.0.0.3, port=4420 00:44:25.469 [2024-11-20 14:05:22.666361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1122e50 is same with the state(6) to be set 00:44:25.469 [2024-11-20 14:05:22.666382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1122e50 (9): Bad file descriptor 00:44:25.469 [2024-11-20 14:05:22.666396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:44:25.469 [2024-11-20 14:05:22.666402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:44:25.469 [2024-11-20 14:05:22.666411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:44:25.469 [2024-11-20 14:05:22.666420] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:44:25.469 [2024-11-20 14:05:22.666428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:44:27.343 4749.25 IOPS, 18.55 MiB/s [2024-11-20T14:05:24.666Z] 3799.40 IOPS, 14.84 MiB/s [2024-11-20T14:05:24.666Z] [2024-11-20 14:05:24.662824] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:44:27.343 [2024-11-20 14:05:24.662904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1122e50 with addr=10.0.0.3, port=4420 00:44:27.343 [2024-11-20 14:05:24.662917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1122e50 is same with the state(6) to be set 00:44:27.343 [2024-11-20 14:05:24.662938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1122e50 (9): Bad file descriptor 00:44:27.343 [2024-11-20 14:05:24.662952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:44:27.343 [2024-11-20 14:05:24.662961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:44:27.343 [2024-11-20 14:05:24.662969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:44:27.343 [2024-11-20 14:05:24.662979] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
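The connect() failures above land roughly two seconds apart (14:05:20.67, 14:05:22.67, 14:05:24.66), which is the --reconnect-delay-sec 2 cadence at work while the listener is gone. As a purely illustrative back-of-the-envelope check -- the numbers come from this run, the arithmetic is not part of the test -- the number of delayed reconnects expected before the ~8.1 s run ends works out as:

    # First reset fires ~1.2 s into the run (see trace.txt below), retries come
    # every 2 s, and the run finishes at ~8.1 s, so about (8.1 - 1.2) / 2 = 3
    # delayed reconnects are expected.
    awk 'BEGIN { runtime = 8.1; first_reset = 1.2; delay = 2;
                 print int((runtime - first_reset) / delay) }'    # -> 3

That matches the three 'reconnect delay bdev controller NVMe0' lines the bpftrace probes record in trace.txt, and the same count is what gets substituted into the (( 3 <= 2 )) check further down, which evaluates false -- presumably the "enough reconnect delays were observed" condition this test is after.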
00:44:27.343 [2024-11-20 14:05:24.662988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:44:29.662 3166.17 IOPS, 12.37 MiB/s [2024-11-20T14:05:26.985Z] 2713.86 IOPS, 10.60 MiB/s [2024-11-20T14:05:26.985Z] [2024-11-20 14:05:26.659236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:44:29.662 [2024-11-20 14:05:26.659309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state
00:44:29.662 [2024-11-20 14:05:26.659335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed
00:44:29.662 [2024-11-20 14:05:26.659344] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state
00:44:29.662 [2024-11-20 14:05:26.659354] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed.
00:44:30.601 2374.62 IOPS, 9.28 MiB/s
00:44:30.601 Latency(us)
00:44:30.601 [2024-11-20T14:05:27.924Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:44:30.601 [2024-11-20T14:05:27.924Z] Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096)
00:44:30.601 NVMe0n1 : 8.10 2344.77 9.16 15.80 0.00 54293.32 1173.35 7033243.39
00:44:30.601 [2024-11-20T14:05:27.924Z] ===================================================================================================================
00:44:30.601 [2024-11-20T14:05:27.924Z] Total : 2344.77 9.16 15.80 0.00 54293.32 1173.35 7033243.39
00:44:30.601 {
00:44:30.601 "results": [
00:44:30.601 {
00:44:30.601 "job": "NVMe0n1",
00:44:30.601 "core_mask": "0x4",
00:44:30.601 "workload": "randread",
00:44:30.601 "status": "finished",
00:44:30.601 "queue_depth": 128,
00:44:30.601 "io_size": 4096,
00:44:30.601 "runtime": 8.101853,
00:44:30.601 "iops": 2344.7722391408483,
00:44:30.601 "mibps": 9.159266559143939,
00:44:30.601 "io_failed": 128,
00:44:30.601 "io_timeout": 0,
00:44:30.601 "avg_latency_us": 54293.318559465704,
00:44:30.601 "min_latency_us": 1173.351965065502,
00:44:30.601 "max_latency_us": 7033243.388646288
00:44:30.601 }
00:44:30.601 ],
00:44:30.602 "core_count": 1
00:44:30.602 }
00:44:30.602 14:05:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:44:30.602 Attaching 5 probes...
00:44:30.602 1176.473815: reset bdev controller NVMe0 00:44:30.602 1176.592346: reconnect bdev controller NVMe0 00:44:30.602 3173.020685: reconnect delay bdev controller NVMe0 00:44:30.602 3173.048436: reconnect bdev controller NVMe0 00:44:30.602 5169.566274: reconnect delay bdev controller NVMe0 00:44:30.602 5169.596049: reconnect bdev controller NVMe0 00:44:30.602 7166.114964: reconnect delay bdev controller NVMe0 00:44:30.602 7166.143444: reconnect bdev controller NVMe0 00:44:30.602 14:05:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:44:30.602 14:05:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:44:30.602 14:05:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 82536 00:44:30.602 14:05:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:44:30.602 14:05:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82524 00:44:30.602 14:05:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82524 ']' 00:44:30.602 14:05:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82524 00:44:30.602 14:05:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:44:30.602 14:05:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:30.602 14:05:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82524 00:44:30.602 killing process with pid 82524 00:44:30.602 Received shutdown signal, test time was about 8.196983 seconds 00:44:30.602 00:44:30.602 Latency(us) 00:44:30.602 [2024-11-20T14:05:27.925Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:30.602 [2024-11-20T14:05:27.925Z] =================================================================================================================== 00:44:30.602 [2024-11-20T14:05:27.925Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:44:30.602 14:05:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:44:30.602 14:05:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:44:30.602 14:05:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82524' 00:44:30.602 14:05:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82524 00:44:30.602 14:05:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82524 00:44:30.862 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:31.122 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:44:31.122 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:44:31.122 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:44:31.122 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:44:31.122 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:31.122 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:44:31.122 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:31.122 14:05:28 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:31.122 rmmod nvme_tcp 00:44:31.122 rmmod nvme_fabrics 00:44:31.122 rmmod nvme_keyring 00:44:31.122 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:31.122 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:44:31.122 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:44:31.122 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 82082 ']' 00:44:31.122 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 82082 00:44:31.122 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82082 ']' 00:44:31.122 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82082 00:44:31.122 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:44:31.122 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:31.122 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82082 00:44:31.122 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:31.122 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:31.122 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82082' 00:44:31.122 killing process with pid 82082 00:44:31.122 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82082 00:44:31.122 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82082 00:44:31.382 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:44:31.382 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:31.382 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:31.382 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:44:31.382 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:44:31.382 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:31.382 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:44:31.382 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:31.382 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:44:31.382 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:44:31.382 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:44:31.382 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:44:31.382 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:44:31.382 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:44:31.382 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:44:31.382 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:44:31.649 14:05:28 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:44:31.649 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:44:31.649 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:44:31.649 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:44:31.649 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:44:31.649 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:44:31.649 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:44:31.649 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:31.649 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:31.649 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:31.649 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:44:31.649 ************************************ 00:44:31.649 END TEST nvmf_timeout 00:44:31.649 ************************************ 00:44:31.649 00:44:31.649 real 0m47.710s 00:44:31.649 user 2m18.339s 00:44:31.649 sys 0m6.243s 00:44:31.649 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:31.649 14:05:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:44:31.649 14:05:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:44:31.649 14:05:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:44:31.649 00:44:31.649 real 5m10.975s 00:44:31.649 user 13m13.940s 00:44:31.649 sys 1m12.099s 00:44:31.649 14:05:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:31.649 14:05:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:44:31.649 ************************************ 00:44:31.649 END TEST nvmf_host 00:44:31.649 ************************************ 00:44:31.919 14:05:29 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:44:31.919 14:05:29 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:44:31.919 00:44:31.919 real 12m29.415s 00:44:31.919 user 29m25.087s 00:44:31.919 sys 3m8.991s 00:44:31.919 14:05:29 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:31.919 14:05:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:31.919 ************************************ 00:44:31.919 END TEST nvmf_tcp 00:44:31.919 ************************************ 00:44:31.919 14:05:29 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:44:31.919 14:05:29 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:44:31.919 14:05:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:31.919 14:05:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:31.919 14:05:29 -- common/autotest_common.sh@10 -- # set +x 00:44:31.919 ************************************ 00:44:31.919 START TEST nvmf_dif 00:44:31.919 ************************************ 00:44:31.919 14:05:29 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:44:31.919 * Looking for test storage... 
00:44:31.919 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:44:31.919 14:05:29 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:44:31.919 14:05:29 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:44:31.919 14:05:29 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:44:32.178 14:05:29 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:44:32.178 14:05:29 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:32.178 14:05:29 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:32.178 14:05:29 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:32.178 14:05:29 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:44:32.178 14:05:29 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:44:32.178 14:05:29 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:44:32.178 14:05:29 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:44:32.178 14:05:29 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:44:32.178 14:05:29 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:44:32.178 14:05:29 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:44:32.178 14:05:29 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:32.178 14:05:29 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:44:32.178 14:05:29 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:44:32.178 14:05:29 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:32.178 14:05:29 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:32.178 14:05:29 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:44:32.178 14:05:29 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:44:32.178 14:05:29 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:32.178 14:05:29 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:44:32.178 14:05:29 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:44:32.178 14:05:29 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:44:32.178 14:05:29 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:44:32.178 14:05:29 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:32.178 14:05:29 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:44:32.178 14:05:29 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:44:32.178 14:05:29 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:32.178 14:05:29 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:32.178 14:05:29 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:44:32.178 14:05:29 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:32.178 14:05:29 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:44:32.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:32.178 --rc genhtml_branch_coverage=1 00:44:32.178 --rc genhtml_function_coverage=1 00:44:32.178 --rc genhtml_legend=1 00:44:32.178 --rc geninfo_all_blocks=1 00:44:32.178 --rc geninfo_unexecuted_blocks=1 00:44:32.178 00:44:32.178 ' 00:44:32.178 14:05:29 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:44:32.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:32.178 --rc genhtml_branch_coverage=1 00:44:32.178 --rc genhtml_function_coverage=1 00:44:32.178 --rc genhtml_legend=1 00:44:32.178 --rc geninfo_all_blocks=1 00:44:32.178 --rc geninfo_unexecuted_blocks=1 00:44:32.178 00:44:32.178 ' 00:44:32.178 14:05:29 nvmf_dif -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:44:32.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:32.178 --rc genhtml_branch_coverage=1 00:44:32.178 --rc genhtml_function_coverage=1 00:44:32.178 --rc genhtml_legend=1 00:44:32.178 --rc geninfo_all_blocks=1 00:44:32.178 --rc geninfo_unexecuted_blocks=1 00:44:32.178 00:44:32.178 ' 00:44:32.178 14:05:29 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:44:32.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:32.178 --rc genhtml_branch_coverage=1 00:44:32.178 --rc genhtml_function_coverage=1 00:44:32.178 --rc genhtml_legend=1 00:44:32.178 --rc geninfo_all_blocks=1 00:44:32.178 --rc geninfo_unexecuted_blocks=1 00:44:32.178 00:44:32.178 ' 00:44:32.178 14:05:29 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:44:32.178 14:05:29 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:44:32.178 14:05:29 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:32.178 14:05:29 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:32.178 14:05:29 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:32.178 14:05:29 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:32.178 14:05:29 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:32.178 14:05:29 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:32.178 14:05:29 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:32.178 14:05:29 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:32.178 14:05:29 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:32.178 14:05:29 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:32.178 14:05:29 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:44:32.178 14:05:29 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=105ec898-1662-46bd-85be-b241e399edb9 00:44:32.178 14:05:29 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:32.178 14:05:29 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:32.178 14:05:29 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:44:32.178 14:05:29 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:32.178 14:05:29 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:44:32.178 14:05:29 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:44:32.178 14:05:29 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:32.178 14:05:29 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:32.178 14:05:29 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:32.178 14:05:29 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:32.178 14:05:29 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:32.178 14:05:29 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:32.178 14:05:29 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:44:32.178 14:05:29 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:32.178 14:05:29 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:44:32.178 14:05:29 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:32.178 14:05:29 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:32.178 14:05:29 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:32.178 14:05:29 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:32.178 14:05:29 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:32.178 14:05:29 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:32.178 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:32.178 14:05:29 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:32.178 14:05:29 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:32.178 14:05:29 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:32.178 14:05:29 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:44:32.178 14:05:29 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:44:32.178 14:05:29 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:44:32.178 14:05:29 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:44:32.178 14:05:29 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:44:32.178 14:05:29 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:44:32.178 14:05:29 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:32.178 14:05:29 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:44:32.179 14:05:29 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:44:32.179 14:05:29 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:44:32.179 14:05:29 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:32.179 14:05:29 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:32.179 14:05:29 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:32.179 14:05:29 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:44:32.179 14:05:29 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:44:32.179 14:05:29 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:44:32.179 14:05:29 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:44:32.179 14:05:29 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:44:32.179 14:05:29 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:44:32.179 14:05:29 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:32.179 14:05:29 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:44:32.179 14:05:29 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:44:32.179 14:05:29 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:44:32.179 14:05:29 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:32.179 14:05:29 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:44:32.179 14:05:29 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:44:32.179 14:05:29 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:44:32.179 14:05:29 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:44:32.179 14:05:29 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:44:32.179 14:05:29 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:44:32.179 14:05:29 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:32.179 14:05:29 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:44:32.179 14:05:29 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:44:32.179 14:05:29 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:44:32.179 14:05:29 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:44:32.179 14:05:29 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:44:32.179 Cannot find device "nvmf_init_br" 00:44:32.179 14:05:29 nvmf_dif -- nvmf/common.sh@162 -- # true 00:44:32.179 14:05:29 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:44:32.179 Cannot find device "nvmf_init_br2" 00:44:32.179 14:05:29 nvmf_dif -- nvmf/common.sh@163 -- # true 00:44:32.179 14:05:29 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:44:32.179 Cannot find device "nvmf_tgt_br" 00:44:32.179 14:05:29 nvmf_dif -- nvmf/common.sh@164 -- # true 00:44:32.179 14:05:29 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:44:32.179 Cannot find device "nvmf_tgt_br2" 00:44:32.179 14:05:29 nvmf_dif -- nvmf/common.sh@165 -- # true 00:44:32.179 14:05:29 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:44:32.179 Cannot find device "nvmf_init_br" 00:44:32.179 14:05:29 nvmf_dif -- nvmf/common.sh@166 -- # true 00:44:32.179 14:05:29 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:44:32.179 Cannot find device "nvmf_init_br2" 00:44:32.179 14:05:29 nvmf_dif -- nvmf/common.sh@167 -- # true 00:44:32.179 14:05:29 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:44:32.179 Cannot find device "nvmf_tgt_br" 00:44:32.179 14:05:29 nvmf_dif -- nvmf/common.sh@168 -- # true 00:44:32.179 14:05:29 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:44:32.179 Cannot find device "nvmf_tgt_br2" 00:44:32.179 14:05:29 nvmf_dif -- nvmf/common.sh@169 -- # true 00:44:32.179 14:05:29 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:44:32.439 Cannot find device "nvmf_br" 00:44:32.439 14:05:29 nvmf_dif -- nvmf/common.sh@170 -- # true 00:44:32.439 14:05:29 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:44:32.439 Cannot find device "nvmf_init_if" 00:44:32.439 14:05:29 nvmf_dif -- nvmf/common.sh@171 -- # true 00:44:32.439 14:05:29 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:44:32.439 Cannot find device "nvmf_init_if2" 00:44:32.439 14:05:29 nvmf_dif -- nvmf/common.sh@172 -- # true 00:44:32.439 14:05:29 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:44:32.439 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:44:32.439 14:05:29 nvmf_dif -- nvmf/common.sh@173 -- # true 00:44:32.439 14:05:29 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:44:32.439 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:44:32.439 14:05:29 nvmf_dif -- nvmf/common.sh@174 -- # true 00:44:32.439 14:05:29 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:44:32.439 14:05:29 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:44:32.439 14:05:29 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:44:32.439 14:05:29 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:44:32.439 14:05:29 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:44:32.439 14:05:29 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:44:32.439 14:05:29 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:44:32.439 14:05:29 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:44:32.439 14:05:29 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:44:32.439 14:05:29 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:44:32.439 14:05:29 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:44:32.439 14:05:29 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:44:32.439 14:05:29 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:44:32.439 14:05:29 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:44:32.439 14:05:29 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:44:32.439 14:05:29 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:44:32.439 14:05:29 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:44:32.439 14:05:29 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:44:32.439 14:05:29 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:44:32.439 14:05:29 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:44:32.439 14:05:29 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:44:32.439 14:05:29 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:44:32.439 14:05:29 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:44:32.439 14:05:29 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:44:32.439 14:05:29 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:44:32.439 14:05:29 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:44:32.439 14:05:29 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:44:32.439 14:05:29 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:44:32.439 14:05:29 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:44:32.439 14:05:29 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:44:32.439 14:05:29 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:44:32.439 14:05:29 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:44:32.439 14:05:29 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:44:32.439 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:44:32.439 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.102 ms 00:44:32.439 00:44:32.439 --- 10.0.0.3 ping statistics --- 00:44:32.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:32.439 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:44:32.439 14:05:29 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:44:32.439 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:44:32.439 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:44:32.439 00:44:32.439 --- 10.0.0.4 ping statistics --- 00:44:32.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:32.439 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:44:32.439 14:05:29 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:44:32.439 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:44:32.439 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:44:32.439 00:44:32.439 --- 10.0.0.1 ping statistics --- 00:44:32.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:32.439 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:44:32.439 14:05:29 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:44:32.699 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:44:32.699 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:44:32.699 00:44:32.699 --- 10.0.0.2 ping statistics --- 00:44:32.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:32.699 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:44:32.699 14:05:29 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:32.699 14:05:29 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:44:32.699 14:05:29 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:44:32.699 14:05:29 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:44:32.960 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:44:33.220 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:44:33.220 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:44:33.220 14:05:30 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:33.220 14:05:30 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:44:33.220 14:05:30 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:44:33.220 14:05:30 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:33.220 14:05:30 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:44:33.220 14:05:30 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:44:33.220 14:05:30 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:44:33.220 14:05:30 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:44:33.220 14:05:30 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:44:33.220 14:05:30 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:33.220 14:05:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:33.220 14:05:30 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=83075 00:44:33.220 14:05:30 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:44:33.220 14:05:30 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 83075 00:44:33.220 14:05:30 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 83075 ']' 00:44:33.220 14:05:30 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:33.220 14:05:30 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:33.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:33.220 14:05:30 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:33.220 14:05:30 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:33.220 14:05:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:33.220 [2024-11-20 14:05:30.450425] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:44:33.220 [2024-11-20 14:05:30.450484] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:33.480 [2024-11-20 14:05:30.598291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:33.480 [2024-11-20 14:05:30.645798] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:44:33.480 [2024-11-20 14:05:30.645846] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:33.480 [2024-11-20 14:05:30.645853] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:33.480 [2024-11-20 14:05:30.645858] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:33.480 [2024-11-20 14:05:30.645862] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:33.480 [2024-11-20 14:05:30.646122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:33.480 [2024-11-20 14:05:30.689684] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:44:34.050 14:05:31 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:34.050 14:05:31 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:44:34.050 14:05:31 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:44:34.050 14:05:31 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:34.050 14:05:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:34.050 14:05:31 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:34.050 14:05:31 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:44:34.050 14:05:31 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:44:34.050 14:05:31 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:34.050 14:05:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:34.050 [2024-11-20 14:05:31.363172] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:34.050 14:05:31 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:34.050 14:05:31 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:44:34.050 14:05:31 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:34.050 14:05:31 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:34.050 14:05:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:34.311 ************************************ 00:44:34.311 START TEST fio_dif_1_default 00:44:34.311 ************************************ 00:44:34.311 14:05:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:44:34.311 14:05:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:44:34.311 14:05:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:44:34.311 14:05:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:44:34.311 14:05:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:44:34.311 14:05:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:44:34.311 14:05:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:44:34.311 14:05:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:34.311 14:05:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:34.311 bdev_null0 00:44:34.311 14:05:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:34.311 14:05:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:34.311 
14:05:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:34.312 [2024-11-20 14:05:31.427185] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:34.312 { 00:44:34.312 "params": { 00:44:34.312 "name": "Nvme$subsystem", 00:44:34.312 "trtype": "$TEST_TRANSPORT", 00:44:34.312 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:34.312 "adrfam": "ipv4", 00:44:34.312 "trsvcid": "$NVMF_PORT", 00:44:34.312 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:34.312 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:34.312 "hdgst": ${hdgst:-false}, 00:44:34.312 "ddgst": ${ddgst:-false} 00:44:34.312 }, 00:44:34.312 "method": "bdev_nvme_attach_controller" 00:44:34.312 } 00:44:34.312 EOF 00:44:34.312 )") 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:44:34.312 "params": { 00:44:34.312 "name": "Nvme0", 00:44:34.312 "trtype": "tcp", 00:44:34.312 "traddr": "10.0.0.3", 00:44:34.312 "adrfam": "ipv4", 00:44:34.312 "trsvcid": "4420", 00:44:34.312 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:34.312 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:34.312 "hdgst": false, 00:44:34.312 "ddgst": false 00:44:34.312 }, 00:44:34.312 "method": "bdev_nvme_attach_controller" 00:44:34.312 }' 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:44:34.312 14:05:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:34.572 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:44:34.572 fio-3.35 00:44:34.572 Starting 1 thread 00:44:46.838 00:44:46.838 filename0: (groupid=0, jobs=1): err= 0: pid=83141: Wed Nov 20 14:05:42 2024 00:44:46.838 read: IOPS=10.2k, BW=39.8MiB/s (41.8MB/s)(398MiB/10001msec) 00:44:46.838 slat (nsec): min=5459, max=84056, avg=7252.47, stdev=2040.49 00:44:46.838 clat (usec): min=275, max=2455, avg=372.43, stdev=35.51 00:44:46.838 lat (usec): min=280, max=2463, avg=379.69, stdev=36.14 00:44:46.838 clat percentiles (usec): 00:44:46.838 | 1.00th=[ 297], 5.00th=[ 
318], 10.00th=[ 343], 20.00th=[ 351], 00:44:46.838 | 30.00th=[ 359], 40.00th=[ 363], 50.00th=[ 371], 60.00th=[ 379], 00:44:46.838 | 70.00th=[ 388], 80.00th=[ 396], 90.00th=[ 408], 95.00th=[ 420], 00:44:46.838 | 99.00th=[ 445], 99.50th=[ 457], 99.90th=[ 603], 99.95th=[ 660], 00:44:46.838 | 99.99th=[ 1582] 00:44:46.838 bw ( KiB/s): min=39584, max=46690, per=100.00%, avg=40872.53, stdev=1504.27, samples=19 00:44:46.838 iops : min= 9896, max=11672, avg=10218.11, stdev=375.96, samples=19 00:44:46.838 lat (usec) : 500=99.73%, 750=0.25%, 1000=0.01% 00:44:46.838 lat (msec) : 2=0.01%, 4=0.01% 00:44:46.838 cpu : usr=85.89%, sys=12.75%, ctx=84, majf=0, minf=0 00:44:46.838 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:46.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:46.838 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:46.838 issued rwts: total=101948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:46.838 latency : target=0, window=0, percentile=100.00%, depth=4 00:44:46.838 00:44:46.838 Run status group 0 (all jobs): 00:44:46.838 READ: bw=39.8MiB/s (41.8MB/s), 39.8MiB/s-39.8MiB/s (41.8MB/s-41.8MB/s), io=398MiB (418MB), run=10001-10001msec 00:44:46.838 14:05:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:44:46.838 14:05:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:44:46.838 14:05:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:44:46.838 14:05:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:46.838 14:05:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:44:46.838 14:05:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:46.838 14:05:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:46.838 14:05:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:46.838 14:05:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:46.838 14:05:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:46.839 00:44:46.839 real 0m11.078s 00:44:46.839 user 0m9.283s 00:44:46.839 sys 0m1.633s 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:46.839 ************************************ 00:44:46.839 END TEST fio_dif_1_default 00:44:46.839 ************************************ 00:44:46.839 14:05:42 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:44:46.839 14:05:42 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:46.839 14:05:42 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:46.839 14:05:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:46.839 ************************************ 00:44:46.839 START TEST fio_dif_1_multi_subsystems 00:44:46.839 ************************************ 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:46.839 bdev_null0 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:46.839 [2024-11-20 14:05:42.573160] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:46.839 bdev_null1 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:46.839 { 00:44:46.839 "params": { 00:44:46.839 "name": "Nvme$subsystem", 00:44:46.839 "trtype": "$TEST_TRANSPORT", 00:44:46.839 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:46.839 "adrfam": "ipv4", 00:44:46.839 "trsvcid": "$NVMF_PORT", 00:44:46.839 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:46.839 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:46.839 "hdgst": ${hdgst:-false}, 00:44:46.839 "ddgst": ${ddgst:-false} 00:44:46.839 }, 00:44:46.839 "method": "bdev_nvme_attach_controller" 00:44:46.839 } 00:44:46.839 EOF 00:44:46.839 )") 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 
00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:46.839 { 00:44:46.839 "params": { 00:44:46.839 "name": "Nvme$subsystem", 00:44:46.839 "trtype": "$TEST_TRANSPORT", 00:44:46.839 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:46.839 "adrfam": "ipv4", 00:44:46.839 "trsvcid": "$NVMF_PORT", 00:44:46.839 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:46.839 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:46.839 "hdgst": ${hdgst:-false}, 00:44:46.839 "ddgst": ${ddgst:-false} 00:44:46.839 }, 00:44:46.839 "method": "bdev_nvme_attach_controller" 00:44:46.839 } 00:44:46.839 EOF 00:44:46.839 )") 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:44:46.839 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:44:46.839 "params": { 00:44:46.839 "name": "Nvme0", 00:44:46.839 "trtype": "tcp", 00:44:46.839 "traddr": "10.0.0.3", 00:44:46.839 "adrfam": "ipv4", 00:44:46.839 "trsvcid": "4420", 00:44:46.839 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:46.839 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:46.839 "hdgst": false, 00:44:46.839 "ddgst": false 00:44:46.839 }, 00:44:46.839 "method": "bdev_nvme_attach_controller" 00:44:46.839 },{ 00:44:46.839 "params": { 00:44:46.839 "name": "Nvme1", 00:44:46.839 "trtype": "tcp", 00:44:46.839 "traddr": "10.0.0.3", 00:44:46.839 "adrfam": "ipv4", 00:44:46.839 "trsvcid": "4420", 00:44:46.840 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:46.840 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:46.840 "hdgst": false, 00:44:46.840 "ddgst": false 00:44:46.840 }, 00:44:46.840 "method": "bdev_nvme_attach_controller" 00:44:46.840 }' 00:44:46.840 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:44:46.840 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:44:46.840 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:44:46.840 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:44:46.840 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:44:46.840 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:44:46.840 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:44:46.840 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:44:46.840 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:44:46.840 14:05:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:46.840 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:44:46.840 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:44:46.840 fio-3.35 00:44:46.840 Starting 2 threads 00:44:56.915 00:44:56.915 filename0: (groupid=0, jobs=1): err= 0: pid=83301: Wed Nov 20 14:05:53 2024 00:44:56.915 read: IOPS=4898, BW=19.1MiB/s (20.1MB/s)(191MiB/10001msec) 00:44:56.915 slat (nsec): min=5737, max=81782, avg=12792.19, stdev=5324.66 00:44:56.915 clat (usec): min=377, max=1808, avg=779.42, stdev=41.78 00:44:56.915 lat (usec): min=383, max=1831, avg=792.21, stdev=42.92 00:44:56.915 clat percentiles (usec): 00:44:56.915 | 1.00th=[ 685], 5.00th=[ 717], 10.00th=[ 734], 20.00th=[ 750], 00:44:56.915 | 30.00th=[ 758], 40.00th=[ 766], 50.00th=[ 783], 60.00th=[ 791], 00:44:56.915 | 70.00th=[ 799], 80.00th=[ 807], 90.00th=[ 824], 95.00th=[ 840], 00:44:56.915 | 99.00th=[ 889], 99.50th=[ 930], 99.90th=[ 996], 99.95th=[ 1029], 00:44:56.915 | 99.99th=[ 1106] 00:44:56.915 bw ( KiB/s): min=18656, max=20736, per=49.99%, avg=19594.11, stdev=471.52, samples=19 00:44:56.915 iops : min= 4664, max= 
5184, avg=4898.53, stdev=117.88, samples=19 00:44:56.915 lat (usec) : 500=0.02%, 750=22.18%, 1000=77.72% 00:44:56.915 lat (msec) : 2=0.08% 00:44:56.915 cpu : usr=90.45%, sys=8.11%, ctx=11, majf=0, minf=0 00:44:56.915 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:56.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:56.915 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:56.915 issued rwts: total=48988,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:56.915 latency : target=0, window=0, percentile=100.00%, depth=4 00:44:56.915 filename1: (groupid=0, jobs=1): err= 0: pid=83302: Wed Nov 20 14:05:53 2024 00:44:56.915 read: IOPS=4901, BW=19.1MiB/s (20.1MB/s)(191MiB/10001msec) 00:44:56.915 slat (nsec): min=4536, max=65063, avg=14249.50, stdev=7301.07 00:44:56.915 clat (usec): min=392, max=1849, avg=773.35, stdev=39.62 00:44:56.915 lat (usec): min=399, max=1882, avg=787.60, stdev=41.81 00:44:56.915 clat percentiles (usec): 00:44:56.915 | 1.00th=[ 685], 5.00th=[ 717], 10.00th=[ 734], 20.00th=[ 750], 00:44:56.915 | 30.00th=[ 758], 40.00th=[ 766], 50.00th=[ 775], 60.00th=[ 783], 00:44:56.915 | 70.00th=[ 791], 80.00th=[ 799], 90.00th=[ 816], 95.00th=[ 832], 00:44:56.915 | 99.00th=[ 881], 99.50th=[ 922], 99.90th=[ 979], 99.95th=[ 996], 00:44:56.915 | 99.99th=[ 1090] 00:44:56.915 bw ( KiB/s): min=18656, max=20640, per=50.02%, avg=19607.58, stdev=470.04, samples=19 00:44:56.915 iops : min= 4664, max= 5160, avg=4901.89, stdev=117.51, samples=19 00:44:56.915 lat (usec) : 500=0.09%, 750=22.82%, 1000=77.04% 00:44:56.915 lat (msec) : 2=0.05% 00:44:56.915 cpu : usr=92.30%, sys=6.45%, ctx=9, majf=0, minf=0 00:44:56.915 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:56.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:56.915 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:56.915 issued rwts: total=49020,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:56.915 latency : target=0, window=0, percentile=100.00%, depth=4 00:44:56.915 00:44:56.915 Run status group 0 (all jobs): 00:44:56.915 READ: bw=38.3MiB/s (40.1MB/s), 19.1MiB/s-19.1MiB/s (20.1MB/s-20.1MB/s), io=383MiB (401MB), run=10001-10001msec 00:44:56.915 14:05:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:44:56.915 14:05:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:44:56.915 14:05:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:44:56.915 14:05:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:56.915 14:05:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:44:56.915 14:05:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:56.915 14:05:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:56.915 14:05:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:56.915 14:05:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:56.915 14:05:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:56.915 14:05:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:56.915 14:05:53 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:44:56.915 14:05:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:56.915 14:05:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:44:56.915 14:05:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:44:56.915 14:05:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:44:56.915 14:05:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:56.915 14:05:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:56.915 14:05:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:56.915 14:05:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:56.915 14:05:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:44:56.915 14:05:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:56.915 14:05:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:56.915 14:05:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:56.915 00:44:56.915 real 0m11.331s 00:44:56.915 user 0m19.202s 00:44:56.915 sys 0m1.821s 00:44:56.915 14:05:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:56.915 14:05:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:56.915 ************************************ 00:44:56.915 END TEST fio_dif_1_multi_subsystems 00:44:56.915 ************************************ 00:44:56.915 14:05:53 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:44:56.915 14:05:53 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:56.915 14:05:53 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:56.915 14:05:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:56.915 ************************************ 00:44:56.915 START TEST fio_dif_rand_params 00:44:56.915 ************************************ 00:44:56.915 14:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:44:56.915 14:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:44:56.915 14:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:44:56.915 14:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:44:56.915 14:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:44:56.915 14:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:44:56.915 14:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:44:56.915 14:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:44:56.915 14:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:44:56.915 14:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:44:56.915 14:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:56.915 14:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:44:56.915 14:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:44:56.915 14:05:53 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:44:56.915 14:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:56.915 14:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:56.915 bdev_null0 00:44:56.916 14:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:56.916 14:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:56.916 14:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:56.916 14:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:56.916 14:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:56.916 14:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:56.916 14:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:56.916 14:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:56.916 14:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:56.916 14:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:44:56.916 14:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:56.916 14:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:56.916 [2024-11-20 14:05:53.975341] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:44:56.916 14:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:56.916 14:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:44:56.916 14:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:56.916 14:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:44:56.916 14:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:56.916 14:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:44:56.916 14:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:44:56.916 14:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:56.916 14:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:44:56.916 14:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:44:56.916 14:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:44:56.916 14:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:44:56.916 14:05:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:44:56.916 14:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:44:56.916 14:05:53 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:44:56.916 14:05:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:44:56.916 14:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:44:56.916 14:05:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:56.916 14:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:44:56.916 14:05:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:56.916 { 00:44:56.916 "params": { 00:44:56.916 "name": "Nvme$subsystem", 00:44:56.916 "trtype": "$TEST_TRANSPORT", 00:44:56.916 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:56.916 "adrfam": "ipv4", 00:44:56.916 "trsvcid": "$NVMF_PORT", 00:44:56.916 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:56.916 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:56.916 "hdgst": ${hdgst:-false}, 00:44:56.916 "ddgst": ${ddgst:-false} 00:44:56.916 }, 00:44:56.916 "method": "bdev_nvme_attach_controller" 00:44:56.916 } 00:44:56.916 EOF 00:44:56.916 )") 00:44:56.916 14:05:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:44:56.916 14:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:44:56.916 14:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:44:56.916 14:05:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:44:56.916 14:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:44:56.916 14:05:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:56.916 14:05:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
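Note: gen_fio_conf supplies the fio job file on /dev/fd/61 while gen_nvmf_target_json supplies the bdev JSON on /dev/fd/62; only the JSON is echoed back in this log. A job file matching the parameters chosen for this case (NULL_DIF=3, bs=128k, numjobs=3, iodepth=3, runtime=5) would look roughly like the sketch below; the filename Nvme0n1 (the namespace bdev behind the Nvme0 attach) and the global options are assumptions, and the literal generated file may differ in detail:

    [global]
    ioengine=spdk_bdev    ; also forced on the fio command line
    thread=1              ; assumption: the bdev engine runs jobs as threads
    time_based=1          ; assumption: run for the full runtime
    runtime=5
    [filename0]
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3
    filename=Nvme0n1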
00:44:56.916 14:05:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:44:56.916 14:05:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:44:56.916 "params": { 00:44:56.916 "name": "Nvme0", 00:44:56.916 "trtype": "tcp", 00:44:56.916 "traddr": "10.0.0.3", 00:44:56.916 "adrfam": "ipv4", 00:44:56.916 "trsvcid": "4420", 00:44:56.916 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:56.916 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:56.916 "hdgst": false, 00:44:56.916 "ddgst": false 00:44:56.916 }, 00:44:56.916 "method": "bdev_nvme_attach_controller" 00:44:56.916 }' 00:44:56.916 14:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:44:56.916 14:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:44:56.916 14:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:44:56.916 14:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:44:56.916 14:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:44:56.916 14:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:44:56.916 14:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:44:56.916 14:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:44:56.916 14:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:44:56.916 14:05:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:56.916 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:44:56.916 ... 
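Note: a quick consistency check for the result blocks that follow. With bs=128 KiB, bandwidth is IOPS times block size, so roughly 255 IOPS per job gives 255 * 128 KiB/s ~ 31.9 MiB/s, which is the per-job BW reported below. Likewise, by Little's law the average completion latency should be about iodepth / IOPS = 3 / 255 ~ 11.8 ms, in line with the ~11.7 ms clat averages shown.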
00:44:56.916 fio-3.35 00:44:56.916 Starting 3 threads 00:45:03.486 00:45:03.486 filename0: (groupid=0, jobs=1): err= 0: pid=83464: Wed Nov 20 14:05:59 2024 00:45:03.486 read: IOPS=255, BW=31.9MiB/s (33.5MB/s)(160MiB/5001msec) 00:45:03.486 slat (nsec): min=7043, max=79238, avg=32207.68, stdev=15626.24 00:45:03.486 clat (usec): min=10982, max=12814, avg=11664.44, stdev=214.42 00:45:03.486 lat (usec): min=11017, max=12855, avg=11696.65, stdev=216.53 00:45:03.486 clat percentiles (usec): 00:45:03.486 | 1.00th=[11076], 5.00th=[11207], 10.00th=[11338], 20.00th=[11469], 00:45:03.486 | 30.00th=[11600], 40.00th=[11600], 50.00th=[11731], 60.00th=[11731], 00:45:03.486 | 70.00th=[11731], 80.00th=[11863], 90.00th=[11863], 95.00th=[11994], 00:45:03.486 | 99.00th=[12256], 99.50th=[12387], 99.90th=[12780], 99.95th=[12780], 00:45:03.486 | 99.99th=[12780] 00:45:03.486 bw ( KiB/s): min=32256, max=33792, per=33.42%, avg=32768.00, stdev=543.06, samples=9 00:45:03.486 iops : min= 252, max= 264, avg=256.00, stdev= 4.24, samples=9 00:45:03.486 lat (msec) : 20=100.00% 00:45:03.486 cpu : usr=96.34%, sys=3.08%, ctx=68, majf=0, minf=0 00:45:03.486 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:03.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:03.486 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:03.486 issued rwts: total=1278,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:03.486 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:03.486 filename0: (groupid=0, jobs=1): err= 0: pid=83465: Wed Nov 20 14:05:59 2024 00:45:03.486 read: IOPS=255, BW=31.9MiB/s (33.5MB/s)(160MiB/5001msec) 00:45:03.486 slat (nsec): min=5535, max=75655, avg=32258.45, stdev=16690.04 00:45:03.486 clat (usec): min=8897, max=15172, avg=11665.18, stdev=303.30 00:45:03.486 lat (usec): min=8913, max=15216, avg=11697.44, stdev=304.31 00:45:03.486 clat percentiles (usec): 00:45:03.486 | 1.00th=[11076], 5.00th=[11207], 10.00th=[11338], 20.00th=[11469], 00:45:03.486 | 30.00th=[11600], 40.00th=[11600], 50.00th=[11731], 60.00th=[11731], 00:45:03.486 | 70.00th=[11731], 80.00th=[11863], 90.00th=[11863], 95.00th=[11994], 00:45:03.486 | 99.00th=[12125], 99.50th=[12649], 99.90th=[15139], 99.95th=[15139], 00:45:03.486 | 99.99th=[15139] 00:45:03.486 bw ( KiB/s): min=32256, max=33792, per=33.42%, avg=32768.00, stdev=543.06, samples=9 00:45:03.486 iops : min= 252, max= 264, avg=256.00, stdev= 4.24, samples=9 00:45:03.486 lat (msec) : 10=0.23%, 20=99.77% 00:45:03.486 cpu : usr=96.76%, sys=2.74%, ctx=8, majf=0, minf=0 00:45:03.486 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:03.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:03.486 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:03.486 issued rwts: total=1278,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:03.486 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:03.487 filename0: (groupid=0, jobs=1): err= 0: pid=83466: Wed Nov 20 14:05:59 2024 00:45:03.487 read: IOPS=255, BW=32.0MiB/s (33.5MB/s)(160MiB/5009msec) 00:45:03.487 slat (usec): min=5, max=110, avg=28.83, stdev=16.67 00:45:03.487 clat (usec): min=7867, max=12851, avg=11665.58, stdev=285.38 00:45:03.487 lat (usec): min=7880, max=12889, avg=11694.40, stdev=285.85 00:45:03.487 clat percentiles (usec): 00:45:03.487 | 1.00th=[11076], 5.00th=[11207], 10.00th=[11338], 20.00th=[11469], 00:45:03.487 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11731], 
60.00th=[11731], 00:45:03.487 | 70.00th=[11731], 80.00th=[11863], 90.00th=[11863], 95.00th=[11994], 00:45:03.487 | 99.00th=[12125], 99.50th=[12125], 99.90th=[12780], 99.95th=[12911], 00:45:03.487 | 99.99th=[12911] 00:45:03.487 bw ( KiB/s): min=32191, max=33024, per=33.36%, avg=32710.30, stdev=405.42, samples=10 00:45:03.487 iops : min= 251, max= 258, avg=255.50, stdev= 3.24, samples=10 00:45:03.487 lat (msec) : 10=0.23%, 20=99.77% 00:45:03.487 cpu : usr=94.81%, sys=4.63%, ctx=9, majf=0, minf=0 00:45:03.487 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:03.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:03.487 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:03.487 issued rwts: total=1281,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:03.487 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:03.487 00:45:03.487 Run status group 0 (all jobs): 00:45:03.487 READ: bw=95.8MiB/s (100MB/s), 31.9MiB/s-32.0MiB/s (33.5MB/s-33.5MB/s), io=480MiB (503MB), run=5001-5009msec 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:45:03.487 14:06:00 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:03.487 bdev_null0 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:03.487 [2024-11-20 14:06:00.076766] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:03.487 bdev_null1 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:45:03.487 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:03.488 bdev_null2 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:03.488 { 00:45:03.488 "params": { 00:45:03.488 "name": "Nvme$subsystem", 00:45:03.488 "trtype": "$TEST_TRANSPORT", 00:45:03.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:03.488 "adrfam": "ipv4", 00:45:03.488 "trsvcid": "$NVMF_PORT", 00:45:03.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:45:03.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:03.488 "hdgst": ${hdgst:-false}, 00:45:03.488 "ddgst": ${ddgst:-false} 00:45:03.488 }, 00:45:03.488 "method": "bdev_nvme_attach_controller" 00:45:03.488 } 00:45:03.488 EOF 00:45:03.488 )") 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:03.488 { 00:45:03.488 "params": { 00:45:03.488 "name": "Nvme$subsystem", 00:45:03.488 "trtype": "$TEST_TRANSPORT", 00:45:03.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:03.488 "adrfam": "ipv4", 00:45:03.488 "trsvcid": "$NVMF_PORT", 00:45:03.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:03.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:03.488 "hdgst": ${hdgst:-false}, 00:45:03.488 "ddgst": ${ddgst:-false} 00:45:03.488 }, 00:45:03.488 "method": "bdev_nvme_attach_controller" 00:45:03.488 } 00:45:03.488 EOF 00:45:03.488 )") 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:45:03.488 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:03.489 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:45:03.489 14:06:00 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:03.489 14:06:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:03.489 { 00:45:03.489 "params": { 00:45:03.489 "name": "Nvme$subsystem", 00:45:03.489 "trtype": "$TEST_TRANSPORT", 00:45:03.489 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:03.489 "adrfam": "ipv4", 00:45:03.489 "trsvcid": "$NVMF_PORT", 00:45:03.489 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:03.489 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:03.489 "hdgst": ${hdgst:-false}, 00:45:03.489 "ddgst": ${ddgst:-false} 00:45:03.489 }, 00:45:03.489 "method": "bdev_nvme_attach_controller" 00:45:03.489 } 00:45:03.489 EOF 00:45:03.489 )") 00:45:03.489 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:45:03.489 14:06:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:03.489 14:06:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:03.489 14:06:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:45:03.489 14:06:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:45:03.489 14:06:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:45:03.489 "params": { 00:45:03.489 "name": "Nvme0", 00:45:03.489 "trtype": "tcp", 00:45:03.489 "traddr": "10.0.0.3", 00:45:03.489 "adrfam": "ipv4", 00:45:03.489 "trsvcid": "4420", 00:45:03.489 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:03.489 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:03.489 "hdgst": false, 00:45:03.489 "ddgst": false 00:45:03.489 }, 00:45:03.489 "method": "bdev_nvme_attach_controller" 00:45:03.489 },{ 00:45:03.489 "params": { 00:45:03.489 "name": "Nvme1", 00:45:03.489 "trtype": "tcp", 00:45:03.489 "traddr": "10.0.0.3", 00:45:03.489 "adrfam": "ipv4", 00:45:03.489 "trsvcid": "4420", 00:45:03.489 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:03.489 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:03.489 "hdgst": false, 00:45:03.489 "ddgst": false 00:45:03.489 }, 00:45:03.489 "method": "bdev_nvme_attach_controller" 00:45:03.489 },{ 00:45:03.489 "params": { 00:45:03.489 "name": "Nvme2", 00:45:03.489 "trtype": "tcp", 00:45:03.489 "traddr": "10.0.0.3", 00:45:03.489 "adrfam": "ipv4", 00:45:03.489 "trsvcid": "4420", 00:45:03.489 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:45:03.489 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:45:03.489 "hdgst": false, 00:45:03.489 "ddgst": false 00:45:03.489 }, 00:45:03.489 "method": "bdev_nvme_attach_controller" 00:45:03.489 }' 00:45:03.489 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:45:03.489 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:45:03.489 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:03.489 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:45:03.489 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:45:03.489 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:03.489 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:45:03.489 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:45:03.489 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:45:03.489 14:06:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:03.489 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:03.489 ... 00:45:03.489 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:03.489 ... 00:45:03.489 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:03.489 ... 00:45:03.489 fio-3.35 00:45:03.489 Starting 24 threads 00:45:15.688 00:45:15.688 filename0: (groupid=0, jobs=1): err= 0: pid=83569: Wed Nov 20 14:06:11 2024 00:45:15.688 read: IOPS=258, BW=1034KiB/s (1058kB/s)(10.1MiB/10011msec) 00:45:15.688 slat (usec): min=3, max=8033, avg=27.19, stdev=221.56 00:45:15.688 clat (msec): min=14, max=143, avg=61.79, stdev=18.51 00:45:15.688 lat (msec): min=14, max=143, avg=61.82, stdev=18.51 00:45:15.688 clat percentiles (msec): 00:45:15.688 | 1.00th=[ 29], 5.00th=[ 37], 10.00th=[ 41], 20.00th=[ 46], 00:45:15.688 | 30.00th=[ 49], 40.00th=[ 56], 50.00th=[ 62], 60.00th=[ 65], 00:45:15.688 | 70.00th=[ 71], 80.00th=[ 74], 90.00th=[ 84], 95.00th=[ 96], 00:45:15.688 | 99.00th=[ 114], 99.50th=[ 131], 99.90th=[ 144], 99.95th=[ 144], 00:45:15.688 | 99.99th=[ 144] 00:45:15.688 bw ( KiB/s): min= 848, max= 1144, per=4.20%, avg=1022.32, stdev=80.16, samples=19 00:45:15.688 iops : min= 212, max= 286, avg=255.58, stdev=20.04, samples=19 00:45:15.688 lat (msec) : 20=0.23%, 50=33.01%, 100=63.32%, 250=3.44% 00:45:15.688 cpu : usr=38.76%, sys=1.30%, ctx=1409, majf=0, minf=9 00:45:15.688 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=82.3%, 16=15.7%, 32=0.0%, >=64=0.0% 00:45:15.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.688 complete : 0=0.0%, 4=87.2%, 8=12.4%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.688 issued rwts: total=2587,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:15.688 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:15.689 filename0: (groupid=0, jobs=1): err= 0: pid=83570: Wed Nov 20 14:06:11 2024 00:45:15.689 read: IOPS=256, BW=1024KiB/s (1049kB/s)(10.0MiB/10002msec) 00:45:15.689 slat (usec): min=6, max=8032, avg=34.16, stdev=275.46 00:45:15.689 clat (msec): min=2, max=164, avg=62.34, stdev=22.89 00:45:15.689 lat (msec): min=2, max=164, avg=62.38, stdev=22.89 00:45:15.689 clat percentiles (msec): 00:45:15.689 | 1.00th=[ 5], 5.00th=[ 30], 10.00th=[ 40], 20.00th=[ 45], 00:45:15.689 | 30.00th=[ 50], 40.00th=[ 56], 50.00th=[ 64], 60.00th=[ 67], 00:45:15.689 | 70.00th=[ 72], 80.00th=[ 79], 90.00th=[ 91], 95.00th=[ 96], 00:45:15.689 | 99.00th=[ 148], 99.50th=[ 165], 99.90th=[ 165], 99.95th=[ 165], 00:45:15.689 | 99.99th=[ 165] 00:45:15.689 bw ( KiB/s): min= 640, max= 1176, per=4.05%, avg=985.68, stdev=163.75, samples=19 00:45:15.689 iops : min= 160, max= 294, avg=246.42, stdev=40.94, samples=19 00:45:15.689 lat (msec) : 4=0.35%, 10=2.19%, 20=0.12%, 50=29.13%, 100=64.15% 00:45:15.689 lat (msec) : 250=4.06% 00:45:15.689 cpu : usr=43.36%, sys=1.26%, ctx=1230, majf=0, minf=9 00:45:15.689 IO depths : 1=0.1%, 2=1.4%, 4=5.5%, 8=78.0%, 16=15.1%, 32=0.0%, >=64=0.0% 00:45:15.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.689 complete : 0=0.0%, 4=88.4%, 8=10.4%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.689 issued rwts: total=2561,0,0,0 
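Note: the 24 threads in this group are numjobs=8 applied to each of the three filename sections (one per subsystem, cnode0 through cnode2). Each job's per= figure is its share of the group's aggregate bandwidth, so with roughly equal jobs it should sit near 1/24 ~ 4.2%; the per-job blocks here report values such as 4.20%, 4.05% and 4.14%, and back-computing from one of them (about 1030 KiB/s at ~4.2%) puts the group aggregate in the neighbourhood of 24 MiB/s for these 4 KiB random reads.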
short=0,0,0,0 dropped=0,0,0,0 00:45:15.689 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:15.689 filename0: (groupid=0, jobs=1): err= 0: pid=83571: Wed Nov 20 14:06:11 2024 00:45:15.689 read: IOPS=252, BW=1010KiB/s (1034kB/s)(9.90MiB/10035msec) 00:45:15.689 slat (usec): min=5, max=8037, avg=43.71, stdev=347.86 00:45:15.689 clat (msec): min=11, max=144, avg=63.14, stdev=19.78 00:45:15.689 lat (msec): min=11, max=144, avg=63.18, stdev=19.78 00:45:15.689 clat percentiles (msec): 00:45:15.689 | 1.00th=[ 17], 5.00th=[ 29], 10.00th=[ 41], 20.00th=[ 48], 00:45:15.689 | 30.00th=[ 54], 40.00th=[ 60], 50.00th=[ 64], 60.00th=[ 69], 00:45:15.689 | 70.00th=[ 72], 80.00th=[ 79], 90.00th=[ 88], 95.00th=[ 94], 00:45:15.689 | 99.00th=[ 117], 99.50th=[ 136], 99.90th=[ 146], 99.95th=[ 146], 00:45:15.689 | 99.99th=[ 146] 00:45:15.689 bw ( KiB/s): min= 816, max= 1724, per=4.14%, avg=1007.00, stdev=186.84, samples=20 00:45:15.689 iops : min= 204, max= 431, avg=251.75, stdev=46.71, samples=20 00:45:15.689 lat (msec) : 20=1.93%, 50=24.94%, 100=70.44%, 250=2.68% 00:45:15.689 cpu : usr=38.44%, sys=0.83%, ctx=1094, majf=0, minf=9 00:45:15.689 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=81.4%, 16=16.6%, 32=0.0%, >=64=0.0% 00:45:15.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.689 complete : 0=0.0%, 4=88.0%, 8=11.6%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.689 issued rwts: total=2534,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:15.689 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:15.689 filename0: (groupid=0, jobs=1): err= 0: pid=83572: Wed Nov 20 14:06:11 2024 00:45:15.689 read: IOPS=269, BW=1077KiB/s (1103kB/s)(10.6MiB/10061msec) 00:45:15.689 slat (usec): min=4, max=8035, avg=24.18, stdev=224.73 00:45:15.689 clat (usec): min=1405, max=144686, avg=59252.15, stdev=23844.96 00:45:15.689 lat (usec): min=1414, max=144694, avg=59276.33, stdev=23840.00 00:45:15.689 clat percentiles (usec): 00:45:15.689 | 1.00th=[ 1532], 5.00th=[ 4686], 10.00th=[ 24249], 20.00th=[ 44827], 00:45:15.689 | 30.00th=[ 49546], 40.00th=[ 56886], 50.00th=[ 62129], 60.00th=[ 66323], 00:45:15.689 | 70.00th=[ 70779], 80.00th=[ 77071], 90.00th=[ 85459], 95.00th=[ 93848], 00:45:15.689 | 99.00th=[113771], 99.50th=[129500], 99.90th=[143655], 99.95th=[143655], 00:45:15.689 | 99.99th=[143655] 00:45:15.689 bw ( KiB/s): min= 816, max= 3040, per=4.43%, avg=1077.20, stdev=468.23, samples=20 00:45:15.689 iops : min= 204, max= 760, avg=269.30, stdev=117.06, samples=20 00:45:15.689 lat (msec) : 2=1.18%, 4=3.10%, 10=1.11%, 20=3.40%, 50=21.85% 00:45:15.689 lat (msec) : 100=66.70%, 250=2.66% 00:45:15.689 cpu : usr=42.24%, sys=1.37%, ctx=1218, majf=0, minf=0 00:45:15.689 IO depths : 1=0.3%, 2=0.8%, 4=2.0%, 8=80.8%, 16=16.1%, 32=0.0%, >=64=0.0% 00:45:15.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.689 complete : 0=0.0%, 4=88.0%, 8=11.5%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.689 issued rwts: total=2709,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:15.689 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:15.689 filename0: (groupid=0, jobs=1): err= 0: pid=83573: Wed Nov 20 14:06:11 2024 00:45:15.689 read: IOPS=258, BW=1033KiB/s (1057kB/s)(10.1MiB/10028msec) 00:45:15.689 slat (usec): min=3, max=8023, avg=28.47, stdev=208.99 00:45:15.689 clat (msec): min=23, max=151, avg=61.79, stdev=17.67 00:45:15.689 lat (msec): min=23, max=151, avg=61.81, stdev=17.68 00:45:15.689 clat percentiles (msec): 00:45:15.689 | 1.00th=[ 35], 5.00th=[ 37], 10.00th=[ 
41], 20.00th=[ 47], 00:45:15.689 | 30.00th=[ 50], 40.00th=[ 56], 50.00th=[ 62], 60.00th=[ 65], 00:45:15.689 | 70.00th=[ 70], 80.00th=[ 75], 90.00th=[ 84], 95.00th=[ 93], 00:45:15.689 | 99.00th=[ 116], 99.50th=[ 129], 99.90th=[ 153], 99.95th=[ 153], 00:45:15.689 | 99.99th=[ 153] 00:45:15.689 bw ( KiB/s): min= 824, max= 1296, per=4.24%, avg=1032.00, stdev=110.61, samples=20 00:45:15.689 iops : min= 206, max= 324, avg=258.00, stdev=27.65, samples=20 00:45:15.689 lat (msec) : 50=32.10%, 100=65.51%, 250=2.39% 00:45:15.689 cpu : usr=41.70%, sys=1.13%, ctx=1227, majf=0, minf=9 00:45:15.689 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.6%, 16=15.9%, 32=0.0%, >=64=0.0% 00:45:15.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.689 complete : 0=0.0%, 4=87.2%, 8=12.6%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.689 issued rwts: total=2589,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:15.689 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:15.689 filename0: (groupid=0, jobs=1): err= 0: pid=83574: Wed Nov 20 14:06:11 2024 00:45:15.689 read: IOPS=250, BW=1000KiB/s (1024kB/s)(9.79MiB/10027msec) 00:45:15.689 slat (usec): min=3, max=10039, avg=55.88, stdev=501.31 00:45:15.689 clat (msec): min=18, max=144, avg=63.76, stdev=19.14 00:45:15.689 lat (msec): min=18, max=144, avg=63.81, stdev=19.15 00:45:15.689 clat percentiles (msec): 00:45:15.689 | 1.00th=[ 23], 5.00th=[ 35], 10.00th=[ 40], 20.00th=[ 48], 00:45:15.689 | 30.00th=[ 53], 40.00th=[ 60], 50.00th=[ 63], 60.00th=[ 69], 00:45:15.689 | 70.00th=[ 72], 80.00th=[ 80], 90.00th=[ 86], 95.00th=[ 96], 00:45:15.689 | 99.00th=[ 113], 99.50th=[ 136], 99.90th=[ 146], 99.95th=[ 146], 00:45:15.689 | 99.99th=[ 146] 00:45:15.689 bw ( KiB/s): min= 784, max= 1474, per=4.09%, avg=996.00, stdev=139.90, samples=20 00:45:15.689 iops : min= 196, max= 368, avg=248.95, stdev=34.86, samples=20 00:45:15.689 lat (msec) : 20=0.36%, 50=27.40%, 100=69.01%, 250=3.23% 00:45:15.689 cpu : usr=32.95%, sys=0.57%, ctx=952, majf=0, minf=9 00:45:15.689 IO depths : 1=0.1%, 2=0.4%, 4=1.4%, 8=81.6%, 16=16.6%, 32=0.0%, >=64=0.0% 00:45:15.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.689 complete : 0=0.0%, 4=87.9%, 8=11.8%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.689 issued rwts: total=2507,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:15.689 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:15.689 filename0: (groupid=0, jobs=1): err= 0: pid=83575: Wed Nov 20 14:06:11 2024 00:45:15.689 read: IOPS=258, BW=1034KiB/s (1059kB/s)(10.1MiB/10013msec) 00:45:15.689 slat (usec): min=3, max=8073, avg=35.23, stdev=273.68 00:45:15.689 clat (msec): min=15, max=143, avg=61.72, stdev=18.16 00:45:15.689 lat (msec): min=15, max=143, avg=61.76, stdev=18.16 00:45:15.689 clat percentiles (msec): 00:45:15.689 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 47], 00:45:15.689 | 30.00th=[ 49], 40.00th=[ 57], 50.00th=[ 62], 60.00th=[ 66], 00:45:15.689 | 70.00th=[ 71], 80.00th=[ 74], 90.00th=[ 84], 95.00th=[ 93], 00:45:15.689 | 99.00th=[ 117], 99.50th=[ 131], 99.90th=[ 144], 99.95th=[ 144], 00:45:15.689 | 99.99th=[ 144] 00:45:15.689 bw ( KiB/s): min= 872, max= 1349, per=4.19%, avg=1019.63, stdev=107.56, samples=19 00:45:15.689 iops : min= 218, max= 337, avg=254.89, stdev=26.85, samples=19 00:45:15.689 lat (msec) : 20=0.27%, 50=31.90%, 100=65.51%, 250=2.32% 00:45:15.689 cpu : usr=39.81%, sys=1.12%, ctx=1194, majf=0, minf=9 00:45:15.689 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=83.0%, 16=16.1%, 32=0.0%, >=64=0.0% 
00:45:15.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.689 complete : 0=0.0%, 4=87.2%, 8=12.6%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.689 issued rwts: total=2589,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:15.689 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:15.689 filename0: (groupid=0, jobs=1): err= 0: pid=83576: Wed Nov 20 14:06:11 2024 00:45:15.689 read: IOPS=255, BW=1023KiB/s (1048kB/s)(10.0MiB/10005msec) 00:45:15.689 slat (usec): min=3, max=8031, avg=29.68, stdev=222.48 00:45:15.689 clat (msec): min=2, max=144, avg=62.40, stdev=19.99 00:45:15.689 lat (msec): min=2, max=144, avg=62.43, stdev=19.99 00:45:15.689 clat percentiles (msec): 00:45:15.689 | 1.00th=[ 6], 5.00th=[ 38], 10.00th=[ 41], 20.00th=[ 47], 00:45:15.689 | 30.00th=[ 49], 40.00th=[ 57], 50.00th=[ 62], 60.00th=[ 67], 00:45:15.689 | 70.00th=[ 72], 80.00th=[ 77], 90.00th=[ 88], 95.00th=[ 96], 00:45:15.689 | 99.00th=[ 118], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:45:15.689 | 99.99th=[ 146] 00:45:15.689 bw ( KiB/s): min= 704, max= 1152, per=4.10%, avg=998.26, stdev=134.86, samples=19 00:45:15.689 iops : min= 176, max= 288, avg=249.53, stdev=33.70, samples=19 00:45:15.689 lat (msec) : 4=0.51%, 10=0.74%, 20=0.23%, 50=31.84%, 100=63.91% 00:45:15.689 lat (msec) : 250=2.77% 00:45:15.689 cpu : usr=38.19%, sys=1.21%, ctx=1208, majf=0, minf=9 00:45:15.689 IO depths : 1=0.1%, 2=0.8%, 4=2.8%, 8=81.0%, 16=15.4%, 32=0.0%, >=64=0.0% 00:45:15.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.689 complete : 0=0.0%, 4=87.5%, 8=11.8%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.689 issued rwts: total=2560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:15.689 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:15.689 filename1: (groupid=0, jobs=1): err= 0: pid=83577: Wed Nov 20 14:06:11 2024 00:45:15.689 read: IOPS=248, BW=994KiB/s (1017kB/s)(9964KiB/10028msec) 00:45:15.689 slat (usec): min=3, max=8035, avg=38.41, stdev=307.25 00:45:15.689 clat (msec): min=14, max=143, avg=64.19, stdev=19.18 00:45:15.690 lat (msec): min=14, max=143, avg=64.23, stdev=19.18 00:45:15.690 clat percentiles (msec): 00:45:15.690 | 1.00th=[ 20], 5.00th=[ 32], 10.00th=[ 42], 20.00th=[ 48], 00:45:15.690 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 70], 00:45:15.690 | 70.00th=[ 72], 80.00th=[ 80], 90.00th=[ 86], 95.00th=[ 95], 00:45:15.690 | 99.00th=[ 121], 99.50th=[ 126], 99.90th=[ 144], 99.95th=[ 144], 00:45:15.690 | 99.99th=[ 144] 00:45:15.690 bw ( KiB/s): min= 760, max= 1672, per=4.07%, avg=989.55, stdev=181.60, samples=20 00:45:15.690 iops : min= 190, max= 418, avg=247.35, stdev=45.37, samples=20 00:45:15.690 lat (msec) : 20=1.20%, 50=22.96%, 100=72.42%, 250=3.41% 00:45:15.690 cpu : usr=32.90%, sys=0.66%, ctx=939, majf=0, minf=9 00:45:15.690 IO depths : 1=0.1%, 2=0.1%, 4=0.4%, 8=82.4%, 16=17.0%, 32=0.0%, >=64=0.0% 00:45:15.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.690 complete : 0=0.0%, 4=87.9%, 8=12.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.690 issued rwts: total=2491,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:15.690 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:15.690 filename1: (groupid=0, jobs=1): err= 0: pid=83578: Wed Nov 20 14:06:11 2024 00:45:15.690 read: IOPS=246, BW=987KiB/s (1011kB/s)(9908KiB/10036msec) 00:45:15.690 slat (usec): min=5, max=8024, avg=21.05, stdev=180.36 00:45:15.690 clat (msec): min=13, max=141, avg=64.68, stdev=20.16 00:45:15.690 lat (msec): 
min=13, max=141, avg=64.70, stdev=20.16 00:45:15.690 clat percentiles (msec): 00:45:15.690 | 1.00th=[ 14], 5.00th=[ 29], 10.00th=[ 40], 20.00th=[ 48], 00:45:15.690 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 71], 00:45:15.690 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 88], 95.00th=[ 96], 00:45:15.690 | 99.00th=[ 120], 99.50th=[ 142], 99.90th=[ 142], 99.95th=[ 142], 00:45:15.690 | 99.99th=[ 142] 00:45:15.690 bw ( KiB/s): min= 752, max= 1768, per=4.04%, avg=984.00, stdev=211.21, samples=20 00:45:15.690 iops : min= 188, max= 442, avg=246.00, stdev=52.80, samples=20 00:45:15.690 lat (msec) : 20=3.23%, 50=20.83%, 100=73.15%, 250=2.79% 00:45:15.690 cpu : usr=35.27%, sys=1.03%, ctx=992, majf=0, minf=9 00:45:15.690 IO depths : 1=0.1%, 2=1.1%, 4=4.3%, 8=78.2%, 16=16.3%, 32=0.0%, >=64=0.0% 00:45:15.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.690 complete : 0=0.0%, 4=88.8%, 8=10.2%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.690 issued rwts: total=2477,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:15.690 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:15.690 filename1: (groupid=0, jobs=1): err= 0: pid=83579: Wed Nov 20 14:06:11 2024 00:45:15.690 read: IOPS=253, BW=1015KiB/s (1039kB/s)(9.94MiB/10027msec) 00:45:15.690 slat (usec): min=3, max=4030, avg=17.76, stdev=80.23 00:45:15.690 clat (msec): min=12, max=145, avg=62.96, stdev=19.79 00:45:15.690 lat (msec): min=12, max=145, avg=62.97, stdev=19.79 00:45:15.690 clat percentiles (msec): 00:45:15.690 | 1.00th=[ 14], 5.00th=[ 31], 10.00th=[ 40], 20.00th=[ 48], 00:45:15.690 | 30.00th=[ 54], 40.00th=[ 59], 50.00th=[ 63], 60.00th=[ 69], 00:45:15.690 | 70.00th=[ 72], 80.00th=[ 78], 90.00th=[ 86], 95.00th=[ 96], 00:45:15.690 | 99.00th=[ 115], 99.50th=[ 136], 99.90th=[ 146], 99.95th=[ 146], 00:45:15.690 | 99.99th=[ 146] 00:45:15.690 bw ( KiB/s): min= 840, max= 1824, per=4.16%, avg=1012.00, stdev=200.99, samples=20 00:45:15.690 iops : min= 210, max= 456, avg=253.00, stdev=50.25, samples=20 00:45:15.690 lat (msec) : 20=2.79%, 50=23.55%, 100=70.36%, 250=3.30% 00:45:15.690 cpu : usr=35.13%, sys=1.15%, ctx=972, majf=0, minf=9 00:45:15.690 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=81.4%, 16=16.5%, 32=0.0%, >=64=0.0% 00:45:15.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.690 complete : 0=0.0%, 4=88.0%, 8=11.7%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.690 issued rwts: total=2544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:15.690 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:15.690 filename1: (groupid=0, jobs=1): err= 0: pid=83580: Wed Nov 20 14:06:11 2024 00:45:15.690 read: IOPS=263, BW=1053KiB/s (1078kB/s)(10.3MiB/10016msec) 00:45:15.690 slat (usec): min=4, max=8040, avg=35.08, stdev=283.52 00:45:15.690 clat (msec): min=15, max=143, avg=60.61, stdev=18.65 00:45:15.690 lat (msec): min=15, max=143, avg=60.64, stdev=18.65 00:45:15.690 clat percentiles (msec): 00:45:15.690 | 1.00th=[ 19], 5.00th=[ 34], 10.00th=[ 40], 20.00th=[ 46], 00:45:15.690 | 30.00th=[ 49], 40.00th=[ 55], 50.00th=[ 61], 60.00th=[ 65], 00:45:15.690 | 70.00th=[ 70], 80.00th=[ 74], 90.00th=[ 83], 95.00th=[ 93], 00:45:15.690 | 99.00th=[ 112], 99.50th=[ 128], 99.90th=[ 144], 99.95th=[ 144], 00:45:15.690 | 99.99th=[ 144] 00:45:15.690 bw ( KiB/s): min= 824, max= 1498, per=4.32%, avg=1050.90, stdev=148.36, samples=20 00:45:15.690 iops : min= 206, max= 374, avg=262.70, stdev=37.01, samples=20 00:45:15.690 lat (msec) : 20=1.10%, 50=31.44%, 100=65.30%, 250=2.16% 00:45:15.690 cpu : 
usr=42.37%, sys=1.12%, ctx=1230, majf=0, minf=9 00:45:15.690 IO depths : 1=0.1%, 2=0.1%, 4=0.4%, 8=83.5%, 16=15.9%, 32=0.0%, >=64=0.0% 00:45:15.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.690 complete : 0=0.0%, 4=87.0%, 8=12.9%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.690 issued rwts: total=2637,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:15.690 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:15.690 filename1: (groupid=0, jobs=1): err= 0: pid=83581: Wed Nov 20 14:06:11 2024 00:45:15.690 read: IOPS=262, BW=1049KiB/s (1074kB/s)(10.3MiB/10022msec) 00:45:15.690 slat (usec): min=3, max=8033, avg=29.40, stdev=207.86 00:45:15.690 clat (msec): min=13, max=143, avg=60.84, stdev=18.10 00:45:15.690 lat (msec): min=13, max=143, avg=60.87, stdev=18.11 00:45:15.690 clat percentiles (msec): 00:45:15.690 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 46], 00:45:15.690 | 30.00th=[ 48], 40.00th=[ 56], 50.00th=[ 61], 60.00th=[ 65], 00:45:15.690 | 70.00th=[ 70], 80.00th=[ 74], 90.00th=[ 84], 95.00th=[ 92], 00:45:15.690 | 99.00th=[ 115], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 144], 00:45:15.690 | 99.99th=[ 144] 00:45:15.690 bw ( KiB/s): min= 864, max= 1448, per=4.30%, avg=1047.35, stdev=126.12, samples=20 00:45:15.690 iops : min= 216, max= 362, avg=261.80, stdev=31.50, samples=20 00:45:15.690 lat (msec) : 20=0.23%, 50=33.24%, 100=64.24%, 250=2.28% 00:45:15.690 cpu : usr=42.39%, sys=0.94%, ctx=1247, majf=0, minf=9 00:45:15.690 IO depths : 1=0.1%, 2=0.1%, 4=0.4%, 8=83.5%, 16=15.9%, 32=0.0%, >=64=0.0% 00:45:15.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.690 complete : 0=0.0%, 4=86.9%, 8=13.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.690 issued rwts: total=2629,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:15.690 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:15.690 filename1: (groupid=0, jobs=1): err= 0: pid=83582: Wed Nov 20 14:06:11 2024 00:45:15.690 read: IOPS=256, BW=1024KiB/s (1049kB/s)(10.0MiB/10042msec) 00:45:15.690 slat (usec): min=5, max=6047, avg=22.24, stdev=168.78 00:45:15.690 clat (msec): min=2, max=160, avg=62.30, stdev=23.24 00:45:15.690 lat (msec): min=2, max=160, avg=62.32, stdev=23.25 00:45:15.690 clat percentiles (msec): 00:45:15.690 | 1.00th=[ 3], 5.00th=[ 14], 10.00th=[ 31], 20.00th=[ 48], 00:45:15.690 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 70], 00:45:15.690 | 70.00th=[ 72], 80.00th=[ 80], 90.00th=[ 87], 95.00th=[ 95], 00:45:15.690 | 99.00th=[ 118], 99.50th=[ 161], 99.90th=[ 161], 99.95th=[ 161], 00:45:15.690 | 99.99th=[ 161] 00:45:15.690 bw ( KiB/s): min= 768, max= 2560, per=4.20%, avg=1022.40, stdev=369.07, samples=20 00:45:15.690 iops : min= 192, max= 640, avg=255.60, stdev=92.27, samples=20 00:45:15.690 lat (msec) : 4=3.11%, 10=0.70%, 20=3.03%, 50=17.73%, 100=73.06% 00:45:15.690 lat (msec) : 250=2.37% 00:45:15.690 cpu : usr=37.67%, sys=1.28%, ctx=1060, majf=0, minf=0 00:45:15.690 IO depths : 1=0.2%, 2=1.4%, 4=4.8%, 8=77.5%, 16=16.1%, 32=0.0%, >=64=0.0% 00:45:15.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.690 complete : 0=0.0%, 4=89.0%, 8=9.9%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.690 issued rwts: total=2572,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:15.690 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:15.690 filename1: (groupid=0, jobs=1): err= 0: pid=83583: Wed Nov 20 14:06:11 2024 00:45:15.690 read: IOPS=249, BW=998KiB/s (1022kB/s)(9988KiB/10012msec) 00:45:15.690 slat 
(usec): min=3, max=8014, avg=25.40, stdev=228.10 00:45:15.690 clat (msec): min=13, max=148, avg=64.05, stdev=17.46 00:45:15.690 lat (msec): min=13, max=148, avg=64.08, stdev=17.45 00:45:15.690 clat percentiles (msec): 00:45:15.690 | 1.00th=[ 33], 5.00th=[ 40], 10.00th=[ 45], 20.00th=[ 48], 00:45:15.690 | 30.00th=[ 54], 40.00th=[ 60], 50.00th=[ 63], 60.00th=[ 69], 00:45:15.690 | 70.00th=[ 72], 80.00th=[ 78], 90.00th=[ 85], 95.00th=[ 94], 00:45:15.690 | 99.00th=[ 115], 99.50th=[ 132], 99.90th=[ 148], 99.95th=[ 148], 00:45:15.690 | 99.99th=[ 148] 00:45:15.690 bw ( KiB/s): min= 816, max= 1128, per=4.06%, avg=988.74, stdev=77.53, samples=19 00:45:15.690 iops : min= 204, max= 282, avg=247.16, stdev=19.35, samples=19 00:45:15.690 lat (msec) : 20=0.24%, 50=25.19%, 100=72.53%, 250=2.04% 00:45:15.690 cpu : usr=36.02%, sys=1.15%, ctx=1077, majf=0, minf=9 00:45:15.690 IO depths : 1=0.1%, 2=0.3%, 4=0.9%, 8=82.3%, 16=16.4%, 32=0.0%, >=64=0.0% 00:45:15.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.690 complete : 0=0.0%, 4=87.6%, 8=12.2%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.691 issued rwts: total=2497,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:15.691 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:15.691 filename1: (groupid=0, jobs=1): err= 0: pid=83584: Wed Nov 20 14:06:11 2024 00:45:15.691 read: IOPS=264, BW=1056KiB/s (1082kB/s)(10.3MiB/10001msec) 00:45:15.691 slat (usec): min=3, max=4050, avg=27.08, stdev=143.59 00:45:15.691 clat (usec): min=1204, max=167811, avg=60478.29, stdev=23183.35 00:45:15.691 lat (usec): min=1212, max=167858, avg=60505.38, stdev=23188.28 00:45:15.691 clat percentiles (usec): 00:45:15.691 | 1.00th=[ 1287], 5.00th=[ 23200], 10.00th=[ 38536], 20.00th=[ 44827], 00:45:15.691 | 30.00th=[ 47973], 40.00th=[ 55313], 50.00th=[ 61080], 60.00th=[ 65799], 00:45:15.691 | 70.00th=[ 70779], 80.00th=[ 77071], 90.00th=[ 86508], 95.00th=[ 95945], 00:45:15.691 | 99.00th=[115868], 99.50th=[164627], 99.90th=[164627], 99.95th=[168821], 00:45:15.691 | 99.99th=[168821] 00:45:15.691 bw ( KiB/s): min= 632, max= 1128, per=4.07%, avg=990.74, stdev=138.88, samples=19 00:45:15.691 iops : min= 158, max= 282, avg=247.68, stdev=34.72, samples=19 00:45:15.691 lat (msec) : 2=2.42%, 4=0.49%, 10=1.93%, 20=0.11%, 50=28.85% 00:45:15.691 lat (msec) : 100=62.06%, 250=4.13% 00:45:15.691 cpu : usr=42.76%, sys=1.20%, ctx=1544, majf=0, minf=9 00:45:15.691 IO depths : 1=0.3%, 2=1.2%, 4=3.6%, 8=79.8%, 16=15.0%, 32=0.0%, >=64=0.0% 00:45:15.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.691 complete : 0=0.0%, 4=87.8%, 8=11.5%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.691 issued rwts: total=2641,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:15.691 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:15.691 filename2: (groupid=0, jobs=1): err= 0: pid=83585: Wed Nov 20 14:06:11 2024 00:45:15.691 read: IOPS=239, BW=958KiB/s (981kB/s)(9600KiB/10025msec) 00:45:15.691 slat (usec): min=3, max=8072, avg=47.28, stdev=414.18 00:45:15.691 clat (msec): min=13, max=143, avg=66.63, stdev=20.26 00:45:15.691 lat (msec): min=13, max=143, avg=66.68, stdev=20.26 00:45:15.691 clat percentiles (msec): 00:45:15.691 | 1.00th=[ 18], 5.00th=[ 35], 10.00th=[ 46], 20.00th=[ 49], 00:45:15.691 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 67], 60.00th=[ 71], 00:45:15.691 | 70.00th=[ 73], 80.00th=[ 82], 90.00th=[ 93], 95.00th=[ 100], 00:45:15.691 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:45:15.691 | 99.99th=[ 144] 
00:45:15.691 bw ( KiB/s): min= 624, max= 1536, per=3.93%, avg=955.40, stdev=179.21, samples=20 00:45:15.691 iops : min= 156, max= 384, avg=238.80, stdev=44.86, samples=20 00:45:15.691 lat (msec) : 20=1.17%, 50=19.71%, 100=74.25%, 250=4.88% 00:45:15.691 cpu : usr=33.02%, sys=0.64%, ctx=959, majf=0, minf=9 00:45:15.691 IO depths : 1=0.1%, 2=1.3%, 4=5.3%, 8=77.2%, 16=16.2%, 32=0.0%, >=64=0.0% 00:45:15.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.691 complete : 0=0.0%, 4=89.2%, 8=9.6%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.691 issued rwts: total=2400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:15.691 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:15.691 filename2: (groupid=0, jobs=1): err= 0: pid=83586: Wed Nov 20 14:06:11 2024 00:45:15.691 read: IOPS=248, BW=994KiB/s (1018kB/s)(9944KiB/10005msec) 00:45:15.691 slat (usec): min=3, max=11060, avg=35.07, stdev=364.74 00:45:15.691 clat (msec): min=15, max=144, avg=64.25, stdev=19.91 00:45:15.691 lat (msec): min=15, max=144, avg=64.29, stdev=19.91 00:45:15.691 clat percentiles (msec): 00:45:15.691 | 1.00th=[ 29], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 47], 00:45:15.691 | 30.00th=[ 51], 40.00th=[ 60], 50.00th=[ 63], 60.00th=[ 70], 00:45:15.691 | 70.00th=[ 73], 80.00th=[ 80], 90.00th=[ 89], 95.00th=[ 97], 00:45:15.691 | 99.00th=[ 117], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:45:15.691 | 99.99th=[ 144] 00:45:15.691 bw ( KiB/s): min= 640, max= 1152, per=4.02%, avg=977.16, stdev=143.83, samples=19 00:45:15.691 iops : min= 160, max= 288, avg=244.21, stdev=35.91, samples=19 00:45:15.691 lat (msec) : 20=0.28%, 50=29.44%, 100=66.29%, 250=3.98% 00:45:15.691 cpu : usr=36.95%, sys=1.33%, ctx=1098, majf=0, minf=9 00:45:15.691 IO depths : 1=0.1%, 2=1.1%, 4=4.5%, 8=78.9%, 16=15.4%, 32=0.0%, >=64=0.0% 00:45:15.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.691 complete : 0=0.0%, 4=88.2%, 8=10.8%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.691 issued rwts: total=2486,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:15.691 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:15.691 filename2: (groupid=0, jobs=1): err= 0: pid=83587: Wed Nov 20 14:06:11 2024 00:45:15.691 read: IOPS=254, BW=1019KiB/s (1043kB/s)(9.95MiB/10002msec) 00:45:15.691 slat (usec): min=3, max=8047, avg=31.62, stdev=293.69 00:45:15.691 clat (msec): min=15, max=143, avg=62.70, stdev=17.92 00:45:15.691 lat (msec): min=15, max=143, avg=62.73, stdev=17.92 00:45:15.691 clat percentiles (msec): 00:45:15.691 | 1.00th=[ 28], 5.00th=[ 38], 10.00th=[ 42], 20.00th=[ 47], 00:45:15.691 | 30.00th=[ 51], 40.00th=[ 59], 50.00th=[ 62], 60.00th=[ 67], 00:45:15.691 | 70.00th=[ 72], 80.00th=[ 75], 90.00th=[ 85], 95.00th=[ 94], 00:45:15.691 | 99.00th=[ 116], 99.50th=[ 130], 99.90th=[ 144], 99.95th=[ 144], 00:45:15.691 | 99.99th=[ 144] 00:45:15.691 bw ( KiB/s): min= 792, max= 1128, per=4.13%, avg=1004.47, stdev=88.68, samples=19 00:45:15.691 iops : min= 198, max= 282, avg=251.05, stdev=22.13, samples=19 00:45:15.691 lat (msec) : 20=0.12%, 50=30.15%, 100=67.45%, 250=2.28% 00:45:15.691 cpu : usr=38.75%, sys=1.19%, ctx=1130, majf=0, minf=9 00:45:15.691 IO depths : 1=0.1%, 2=0.4%, 4=1.1%, 8=82.6%, 16=15.9%, 32=0.0%, >=64=0.0% 00:45:15.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.691 complete : 0=0.0%, 4=87.3%, 8=12.5%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.691 issued rwts: total=2547,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:15.691 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:45:15.691 filename2: (groupid=0, jobs=1): err= 0: pid=83588: Wed Nov 20 14:06:11 2024 00:45:15.691 read: IOPS=250, BW=1003KiB/s (1027kB/s)(9.82MiB/10030msec) 00:45:15.691 slat (usec): min=5, max=8078, avg=35.17, stdev=320.32 00:45:15.691 clat (msec): min=11, max=144, avg=63.59, stdev=19.35 00:45:15.691 lat (msec): min=11, max=144, avg=63.63, stdev=19.35 00:45:15.691 clat percentiles (msec): 00:45:15.691 | 1.00th=[ 14], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 48], 00:45:15.691 | 30.00th=[ 54], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 70], 00:45:15.691 | 70.00th=[ 72], 80.00th=[ 77], 90.00th=[ 87], 95.00th=[ 95], 00:45:15.691 | 99.00th=[ 111], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 144], 00:45:15.691 | 99.99th=[ 144] 00:45:15.691 bw ( KiB/s): min= 784, max= 1676, per=4.11%, avg=1001.80, stdev=171.38, samples=20 00:45:15.691 iops : min= 196, max= 419, avg=250.45, stdev=42.84, samples=20 00:45:15.691 lat (msec) : 20=1.91%, 50=24.37%, 100=70.82%, 250=2.90% 00:45:15.691 cpu : usr=34.71%, sys=0.85%, ctx=1008, majf=0, minf=9 00:45:15.691 IO depths : 1=0.1%, 2=0.5%, 4=1.6%, 8=81.3%, 16=16.6%, 32=0.0%, >=64=0.0% 00:45:15.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.691 complete : 0=0.0%, 4=88.0%, 8=11.6%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.691 issued rwts: total=2515,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:15.691 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:15.691 filename2: (groupid=0, jobs=1): err= 0: pid=83589: Wed Nov 20 14:06:11 2024 00:45:15.691 read: IOPS=256, BW=1026KiB/s (1051kB/s)(10.0MiB/10024msec) 00:45:15.691 slat (usec): min=3, max=11996, avg=47.41, stdev=509.60 00:45:15.691 clat (msec): min=21, max=144, avg=62.16, stdev=18.42 00:45:15.691 lat (msec): min=21, max=144, avg=62.20, stdev=18.44 00:45:15.691 clat percentiles (msec): 00:45:15.691 | 1.00th=[ 26], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 47], 00:45:15.691 | 30.00th=[ 50], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 68], 00:45:15.691 | 70.00th=[ 72], 80.00th=[ 75], 90.00th=[ 85], 95.00th=[ 94], 00:45:15.691 | 99.00th=[ 113], 99.50th=[ 124], 99.90th=[ 144], 99.95th=[ 144], 00:45:15.691 | 99.99th=[ 144] 00:45:15.691 bw ( KiB/s): min= 872, max= 1280, per=4.20%, avg=1022.30, stdev=95.39, samples=20 00:45:15.691 iops : min= 218, max= 320, avg=255.55, stdev=23.83, samples=20 00:45:15.691 lat (msec) : 50=31.39%, 100=66.55%, 250=2.06% 00:45:15.691 cpu : usr=35.25%, sys=1.02%, ctx=970, majf=0, minf=10 00:45:15.691 IO depths : 1=0.1%, 2=0.6%, 4=2.3%, 8=81.3%, 16=15.8%, 32=0.0%, >=64=0.0% 00:45:15.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.691 complete : 0=0.0%, 4=87.7%, 8=11.8%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.691 issued rwts: total=2571,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:15.691 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:15.691 filename2: (groupid=0, jobs=1): err= 0: pid=83590: Wed Nov 20 14:06:11 2024 00:45:15.691 read: IOPS=251, BW=1008KiB/s (1032kB/s)(9.88MiB/10040msec) 00:45:15.691 slat (usec): min=4, max=8067, avg=49.73, stdev=447.74 00:45:15.691 clat (msec): min=9, max=150, avg=63.28, stdev=20.04 00:45:15.691 lat (msec): min=9, max=150, avg=63.33, stdev=20.04 00:45:15.691 clat percentiles (msec): 00:45:15.691 | 1.00th=[ 15], 5.00th=[ 25], 10.00th=[ 40], 20.00th=[ 48], 00:45:15.691 | 30.00th=[ 55], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 70], 00:45:15.691 | 70.00th=[ 72], 80.00th=[ 78], 90.00th=[ 88], 95.00th=[ 94], 00:45:15.691 | 99.00th=[ 
117], 99.50th=[ 129], 99.90th=[ 150], 99.95th=[ 150], 00:45:15.691 | 99.99th=[ 150] 00:45:15.691 bw ( KiB/s): min= 784, max= 1896, per=4.13%, avg=1005.20, stdev=222.96, samples=20 00:45:15.691 iops : min= 196, max= 474, avg=251.30, stdev=55.74, samples=20 00:45:15.691 lat (msec) : 10=0.04%, 20=3.28%, 50=21.70%, 100=72.29%, 250=2.69% 00:45:15.691 cpu : usr=37.18%, sys=0.94%, ctx=1050, majf=0, minf=9 00:45:15.691 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=82.1%, 16=16.9%, 32=0.0%, >=64=0.0% 00:45:15.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.691 complete : 0=0.0%, 4=87.9%, 8=11.9%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.691 issued rwts: total=2530,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:15.691 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:15.691 filename2: (groupid=0, jobs=1): err= 0: pid=83591: Wed Nov 20 14:06:11 2024 00:45:15.692 read: IOPS=249, BW=997KiB/s (1021kB/s)(9.77MiB/10029msec) 00:45:15.692 slat (usec): min=5, max=8046, avg=31.47, stdev=277.95 00:45:15.692 clat (msec): min=24, max=143, avg=64.02, stdev=18.16 00:45:15.692 lat (msec): min=24, max=143, avg=64.05, stdev=18.16 00:45:15.692 clat percentiles (msec): 00:45:15.692 | 1.00th=[ 29], 5.00th=[ 39], 10.00th=[ 42], 20.00th=[ 48], 00:45:15.692 | 30.00th=[ 52], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 70], 00:45:15.692 | 70.00th=[ 72], 80.00th=[ 78], 90.00th=[ 86], 95.00th=[ 96], 00:45:15.692 | 99.00th=[ 120], 99.50th=[ 131], 99.90th=[ 144], 99.95th=[ 144], 00:45:15.692 | 99.99th=[ 144] 00:45:15.692 bw ( KiB/s): min= 752, max= 1168, per=4.08%, avg=993.70, stdev=103.97, samples=20 00:45:15.692 iops : min= 188, max= 292, avg=248.40, stdev=25.95, samples=20 00:45:15.692 lat (msec) : 50=27.36%, 100=69.72%, 250=2.92% 00:45:15.692 cpu : usr=37.00%, sys=1.15%, ctx=1077, majf=0, minf=9 00:45:15.692 IO depths : 1=0.1%, 2=0.7%, 4=2.5%, 8=80.6%, 16=16.0%, 32=0.0%, >=64=0.0% 00:45:15.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.692 complete : 0=0.0%, 4=87.9%, 8=11.5%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.692 issued rwts: total=2500,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:15.692 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:15.692 filename2: (groupid=0, jobs=1): err= 0: pid=83592: Wed Nov 20 14:06:11 2024 00:45:15.692 read: IOPS=250, BW=1003KiB/s (1027kB/s)(9.82MiB/10018msec) 00:45:15.692 slat (usec): min=4, max=8049, avg=37.81, stdev=262.37 00:45:15.692 clat (msec): min=18, max=148, avg=63.58, stdev=18.46 00:45:15.692 lat (msec): min=18, max=148, avg=63.62, stdev=18.47 00:45:15.692 clat percentiles (msec): 00:45:15.692 | 1.00th=[ 34], 5.00th=[ 37], 10.00th=[ 43], 20.00th=[ 48], 00:45:15.692 | 30.00th=[ 51], 40.00th=[ 59], 50.00th=[ 62], 60.00th=[ 68], 00:45:15.692 | 70.00th=[ 72], 80.00th=[ 79], 90.00th=[ 86], 95.00th=[ 96], 00:45:15.692 | 99.00th=[ 118], 99.50th=[ 125], 99.90th=[ 148], 99.95th=[ 148], 00:45:15.692 | 99.99th=[ 148] 00:45:15.692 bw ( KiB/s): min= 656, max= 1232, per=4.11%, avg=1001.60, stdev=126.64, samples=20 00:45:15.692 iops : min= 164, max= 308, avg=250.40, stdev=31.66, samples=20 00:45:15.692 lat (msec) : 20=0.16%, 50=29.57%, 100=67.01%, 250=3.26% 00:45:15.692 cpu : usr=36.86%, sys=0.85%, ctx=1077, majf=0, minf=9 00:45:15.692 IO depths : 1=0.2%, 2=0.8%, 4=2.5%, 8=80.9%, 16=15.7%, 32=0.0%, >=64=0.0% 00:45:15.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.692 complete : 0=0.0%, 4=87.7%, 8=11.8%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:15.692 issued 
rwts: total=2513,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:15.692 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:15.692 00:45:15.692 Run status group 0 (all jobs): 00:45:15.692 READ: bw=23.8MiB/s (24.9MB/s), 958KiB/s-1077KiB/s (981kB/s-1103kB/s), io=239MiB (251MB), run=10001-10061msec 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd 
bdev_null_delete bdev_null2 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:15.692 bdev_null0 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:15.692 [2024-11-20 14:06:11.655162] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@18 -- # local sub_id=1 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:15.692 bdev_null1 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:45:15.692 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:15.693 { 00:45:15.693 "params": { 00:45:15.693 "name": "Nvme$subsystem", 00:45:15.693 "trtype": "$TEST_TRANSPORT", 00:45:15.693 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:15.693 "adrfam": "ipv4", 00:45:15.693 "trsvcid": "$NVMF_PORT", 00:45:15.693 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:15.693 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:15.693 "hdgst": ${hdgst:-false}, 00:45:15.693 "ddgst": ${ddgst:-false} 00:45:15.693 }, 00:45:15.693 "method": "bdev_nvme_attach_controller" 00:45:15.693 } 
00:45:15.693 EOF 00:45:15.693 )") 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:15.693 { 00:45:15.693 "params": { 00:45:15.693 "name": "Nvme$subsystem", 00:45:15.693 "trtype": "$TEST_TRANSPORT", 00:45:15.693 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:15.693 "adrfam": "ipv4", 00:45:15.693 "trsvcid": "$NVMF_PORT", 00:45:15.693 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:15.693 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:15.693 "hdgst": ${hdgst:-false}, 00:45:15.693 "ddgst": ${ddgst:-false} 00:45:15.693 }, 00:45:15.693 "method": "bdev_nvme_attach_controller" 00:45:15.693 } 00:45:15.693 EOF 00:45:15.693 )") 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:45:15.693 "params": { 00:45:15.693 "name": "Nvme0", 00:45:15.693 "trtype": "tcp", 00:45:15.693 "traddr": "10.0.0.3", 00:45:15.693 "adrfam": "ipv4", 00:45:15.693 "trsvcid": "4420", 00:45:15.693 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:15.693 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:15.693 "hdgst": false, 00:45:15.693 "ddgst": false 00:45:15.693 }, 00:45:15.693 "method": "bdev_nvme_attach_controller" 00:45:15.693 },{ 00:45:15.693 "params": { 00:45:15.693 "name": "Nvme1", 00:45:15.693 "trtype": "tcp", 00:45:15.693 "traddr": "10.0.0.3", 00:45:15.693 "adrfam": "ipv4", 00:45:15.693 "trsvcid": "4420", 00:45:15.693 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:15.693 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:15.693 "hdgst": false, 00:45:15.693 "ddgst": false 00:45:15.693 }, 00:45:15.693 "method": "bdev_nvme_attach_controller" 00:45:15.693 }' 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:45:15.693 14:06:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:15.693 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:45:15.693 ... 00:45:15.693 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:45:15.693 ... 
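The trace above pipes a generated JSON config into fio through /dev/fd and preloads the SPDK bdev plugin (LD_PRELOAD of build/fio/spdk_bdev together with --ioengine=spdk_bdev). A minimal standalone sketch of the same launch follows; the config file name, the outer "subsystems"/"bdev" wrapper, and any fio options not visible above (the visible ones are rw=randread, bs=8k/16k/128k, iodepth=8, numjobs=2, runtime=5) are illustrative assumptions, while the attach-controller parameters are the ones printed by the trace.

# Sketch only: reproduce the fio launch outside the test harness.
# Assumes the NVMe-oF target configured earlier in this log is still listening
# on 10.0.0.3:4420 and that SPDK was built with ./configure --with-fio.
cat > bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0"
          }
        }
      ]
    }
  ]
}
EOF

# The plugin exposes the attached controller's namespaces as bdevs (Nvme0n1),
# which fio addresses via filename=; thread=1 is required by the SPDK fio plugin.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  fio --name=rand --ioengine=spdk_bdev --spdk_json_conf=bdev.json \
      --thread=1 --filename=Nvme0n1 \
      --rw=randread --bs=8k --iodepth=8 --numjobs=2 --runtime=5 --time_based=1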
00:45:15.693 fio-3.35 00:45:15.693 Starting 4 threads 00:45:20.971 00:45:20.971 filename0: (groupid=0, jobs=1): err= 0: pid=83724: Wed Nov 20 14:06:17 2024 00:45:20.971 read: IOPS=2071, BW=16.2MiB/s (17.0MB/s)(80.9MiB/5002msec) 00:45:20.971 slat (nsec): min=5920, max=97216, avg=22802.85, stdev=13340.70 00:45:20.971 clat (usec): min=391, max=7301, avg=3780.14, stdev=889.05 00:45:20.971 lat (usec): min=401, max=7347, avg=3802.94, stdev=888.84 00:45:20.971 clat percentiles (usec): 00:45:20.971 | 1.00th=[ 1762], 5.00th=[ 2212], 10.00th=[ 2507], 20.00th=[ 3064], 00:45:20.971 | 30.00th=[ 3359], 40.00th=[ 3621], 50.00th=[ 3818], 60.00th=[ 4047], 00:45:20.971 | 70.00th=[ 4293], 80.00th=[ 4490], 90.00th=[ 4883], 95.00th=[ 5145], 00:45:20.971 | 99.00th=[ 5669], 99.50th=[ 5932], 99.90th=[ 6783], 99.95th=[ 6849], 00:45:20.971 | 99.99th=[ 7177] 00:45:20.971 bw ( KiB/s): min=14320, max=17696, per=22.28%, avg=16374.78, stdev=1103.42, samples=9 00:45:20.971 iops : min= 1790, max= 2212, avg=2046.78, stdev=137.87, samples=9 00:45:20.971 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.09% 00:45:20.971 lat (msec) : 2=2.54%, 4=55.02%, 10=42.30% 00:45:20.971 cpu : usr=96.86%, sys=2.48%, ctx=6, majf=0, minf=1 00:45:20.971 IO depths : 1=2.2%, 2=14.7%, 4=55.9%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:20.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:20.971 complete : 0=0.0%, 4=94.1%, 8=5.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:20.971 issued rwts: total=10361,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:20.971 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:20.971 filename0: (groupid=0, jobs=1): err= 0: pid=83725: Wed Nov 20 14:06:17 2024 00:45:20.971 read: IOPS=2113, BW=16.5MiB/s (17.3MB/s)(82.6MiB/5001msec) 00:45:20.971 slat (usec): min=5, max=400, avg=23.15, stdev=14.24 00:45:20.971 clat (usec): min=388, max=6917, avg=3705.00, stdev=890.60 00:45:20.971 lat (usec): min=400, max=6930, avg=3728.15, stdev=890.47 00:45:20.971 clat percentiles (usec): 00:45:20.971 | 1.00th=[ 1319], 5.00th=[ 2114], 10.00th=[ 2442], 20.00th=[ 2999], 00:45:20.971 | 30.00th=[ 3294], 40.00th=[ 3556], 50.00th=[ 3720], 60.00th=[ 3982], 00:45:20.971 | 70.00th=[ 4228], 80.00th=[ 4490], 90.00th=[ 4752], 95.00th=[ 5080], 00:45:20.971 | 99.00th=[ 5538], 99.50th=[ 5735], 99.90th=[ 6194], 99.95th=[ 6587], 00:45:20.971 | 99.99th=[ 6849] 00:45:20.971 bw ( KiB/s): min=14192, max=17792, per=22.79%, avg=16746.67, stdev=1145.26, samples=9 00:45:20.971 iops : min= 1774, max= 2224, avg=2093.33, stdev=143.16, samples=9 00:45:20.971 lat (usec) : 500=0.02%, 750=0.04%, 1000=0.06% 00:45:20.971 lat (msec) : 2=3.46%, 4=57.21%, 10=39.21% 00:45:20.971 cpu : usr=96.60%, sys=2.64%, ctx=22, majf=0, minf=0 00:45:20.971 IO depths : 1=2.0%, 2=12.9%, 4=57.0%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:20.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:20.971 complete : 0=0.0%, 4=94.7%, 8=5.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:20.972 issued rwts: total=10571,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:20.972 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:20.972 filename1: (groupid=0, jobs=1): err= 0: pid=83726: Wed Nov 20 14:06:17 2024 00:45:20.972 read: IOPS=2612, BW=20.4MiB/s (21.4MB/s)(102MiB/5002msec) 00:45:20.972 slat (nsec): min=5574, max=77150, avg=13277.85, stdev=8596.56 00:45:20.972 clat (usec): min=477, max=6964, avg=3025.71, stdev=957.87 00:45:20.972 lat (usec): min=495, max=7006, avg=3038.98, stdev=958.97 00:45:20.972 clat percentiles (usec): 
00:45:20.972 | 1.00th=[ 1029], 5.00th=[ 1549], 10.00th=[ 1713], 20.00th=[ 2057], 00:45:20.972 | 30.00th=[ 2474], 40.00th=[ 2835], 50.00th=[ 3064], 60.00th=[ 3326], 00:45:20.972 | 70.00th=[ 3589], 80.00th=[ 3949], 90.00th=[ 4293], 95.00th=[ 4490], 00:45:20.972 | 99.00th=[ 4817], 99.50th=[ 5014], 99.90th=[ 5669], 99.95th=[ 5866], 00:45:20.972 | 99.99th=[ 6915] 00:45:20.972 bw ( KiB/s): min=18634, max=24301, per=27.81%, avg=20432.78, stdev=2061.38, samples=9 00:45:20.972 iops : min= 2329, max= 3037, avg=2554.00, stdev=257.55, samples=9 00:45:20.972 lat (usec) : 500=0.01%, 750=0.22%, 1000=0.69% 00:45:20.972 lat (msec) : 2=17.53%, 4=62.88%, 10=18.67% 00:45:20.972 cpu : usr=94.64%, sys=4.60%, ctx=8, majf=0, minf=0 00:45:20.972 IO depths : 1=0.4%, 2=3.8%, 4=63.0%, 8=32.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:20.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:20.972 complete : 0=0.0%, 4=98.5%, 8=1.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:20.972 issued rwts: total=13069,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:20.972 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:20.972 filename1: (groupid=0, jobs=1): err= 0: pid=83727: Wed Nov 20 14:06:17 2024 00:45:20.972 read: IOPS=2388, BW=18.7MiB/s (19.6MB/s)(93.4MiB/5003msec) 00:45:20.972 slat (usec): min=5, max=104, avg=13.08, stdev= 8.70 00:45:20.972 clat (usec): min=462, max=6965, avg=3309.44, stdev=1076.05 00:45:20.972 lat (usec): min=476, max=6990, avg=3322.52, stdev=1077.19 00:45:20.972 clat percentiles (usec): 00:45:20.972 | 1.00th=[ 1020], 5.00th=[ 1582], 10.00th=[ 1713], 20.00th=[ 2245], 00:45:20.972 | 30.00th=[ 2835], 40.00th=[ 3097], 50.00th=[ 3326], 60.00th=[ 3589], 00:45:20.972 | 70.00th=[ 3884], 80.00th=[ 4359], 90.00th=[ 4686], 95.00th=[ 5014], 00:45:20.972 | 99.00th=[ 5538], 99.50th=[ 5669], 99.90th=[ 5866], 99.95th=[ 5932], 00:45:20.972 | 99.99th=[ 6259] 00:45:20.972 bw ( KiB/s): min=14848, max=23169, per=25.63%, avg=18832.44, stdev=2743.68, samples=9 00:45:20.972 iops : min= 1856, max= 2896, avg=2354.00, stdev=342.88, samples=9 00:45:20.972 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.85% 00:45:20.972 lat (msec) : 2=14.69%, 4=56.64%, 10=27.80% 00:45:20.972 cpu : usr=95.36%, sys=3.92%, ctx=63, majf=0, minf=0 00:45:20.972 IO depths : 1=0.5%, 2=9.1%, 4=59.6%, 8=30.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:20.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:20.972 complete : 0=0.0%, 4=96.5%, 8=3.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:20.972 issued rwts: total=11949,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:20.972 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:20.972 00:45:20.972 Run status group 0 (all jobs): 00:45:20.972 READ: bw=71.8MiB/s (75.2MB/s), 16.2MiB/s-20.4MiB/s (17.0MB/s-21.4MB/s), io=359MiB (376MB), run=5001-5003msec 00:45:20.972 14:06:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:45:20.972 14:06:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:45:20.972 14:06:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:20.972 14:06:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:20.972 14:06:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:45:20.972 14:06:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:20.972 14:06:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:20.972 14:06:17 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:20.972 14:06:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:20.972 14:06:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:20.972 14:06:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:20.972 14:06:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:20.972 14:06:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:20.972 14:06:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:20.972 14:06:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:45:20.972 14:06:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:45:20.972 14:06:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:20.972 14:06:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:20.972 14:06:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:20.972 14:06:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:20.972 14:06:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:45:20.972 14:06:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:20.972 14:06:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:20.972 14:06:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:20.972 00:45:20.972 real 0m23.921s 00:45:20.972 user 2m7.427s 00:45:20.972 sys 0m4.832s 00:45:20.972 14:06:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:20.972 14:06:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:20.972 ************************************ 00:45:20.972 END TEST fio_dif_rand_params 00:45:20.972 ************************************ 00:45:20.972 14:06:17 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:45:20.972 14:06:17 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:45:20.972 14:06:17 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:20.972 14:06:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:20.972 ************************************ 00:45:20.972 START TEST fio_dif_digest 00:45:20.972 ************************************ 00:45:20.972 14:06:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:45:20.972 14:06:17 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:45:20.972 14:06:17 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:45:20.972 14:06:17 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:45:20.972 14:06:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:45:20.972 14:06:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:45:20.972 14:06:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:45:20.972 14:06:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:45:20.972 14:06:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:45:20.972 14:06:17 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:45:20.972 14:06:17 
nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:45:20.972 14:06:17 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:45:20.972 14:06:17 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:45:20.972 14:06:17 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:45:20.972 14:06:17 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:45:20.972 14:06:17 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:45:20.972 14:06:17 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:45:20.972 14:06:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:20.972 14:06:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:20.972 bdev_null0 00:45:20.972 14:06:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:20.972 14:06:17 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:20.972 14:06:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:20.972 14:06:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:20.972 14:06:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:20.972 14:06:17 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:20.972 14:06:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:20.972 14:06:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:20.972 14:06:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:20.972 14:06:17 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:45:20.972 14:06:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:20.972 14:06:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:20.972 [2024-11-20 14:06:17.980125] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:45:20.972 14:06:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:20.972 14:06:17 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:45:20.972 14:06:17 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:45:20.972 14:06:17 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:45:20.972 14:06:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:45:20.972 14:06:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:45:20.972 14:06:17 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:20.972 14:06:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:20.972 14:06:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:20.972 { 00:45:20.972 "params": { 00:45:20.972 "name": "Nvme$subsystem", 00:45:20.972 "trtype": "$TEST_TRANSPORT", 00:45:20.972 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:20.972 "adrfam": "ipv4", 00:45:20.972 "trsvcid": "$NVMF_PORT", 00:45:20.972 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:20.972 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:45:20.972 "hdgst": ${hdgst:-false}, 00:45:20.972 "ddgst": ${ddgst:-false} 00:45:20.972 }, 00:45:20.972 "method": "bdev_nvme_attach_controller" 00:45:20.972 } 00:45:20.972 EOF 00:45:20.972 )") 00:45:20.972 14:06:17 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:45:20.972 14:06:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:20.972 14:06:17 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:45:20.973 14:06:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:45:20.973 14:06:17 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:45:20.973 14:06:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:20.973 14:06:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:45:20.973 14:06:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:45:20.973 14:06:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:45:20.973 14:06:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:45:20.973 14:06:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:20.973 14:06:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:45:20.973 14:06:17 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:45:20.973 14:06:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:45:20.973 14:06:17 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:45:20.973 14:06:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:45:20.973 14:06:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:20.973 14:06:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:45:20.973 14:06:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:45:20.973 14:06:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:45:20.973 "params": { 00:45:20.973 "name": "Nvme0", 00:45:20.973 "trtype": "tcp", 00:45:20.973 "traddr": "10.0.0.3", 00:45:20.973 "adrfam": "ipv4", 00:45:20.973 "trsvcid": "4420", 00:45:20.973 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:20.973 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:20.973 "hdgst": true, 00:45:20.973 "ddgst": true 00:45:20.973 }, 00:45:20.973 "method": "bdev_nvme_attach_controller" 00:45:20.973 }' 00:45:20.973 14:06:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:45:20.973 14:06:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:45:20.973 14:06:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:20.973 14:06:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:45:20.973 14:06:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:45:20.973 14:06:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:20.973 14:06:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:45:20.973 14:06:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:45:20.973 14:06:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:45:20.973 14:06:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:20.973 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:45:20.973 ... 
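For reference, the target-side setup that the fio_dif_digest trace above drives through rpc_cmd can be issued directly with scripts/rpc.py; a sketch follows. The commands and arguments are the ones visible in the trace itself; running them by hand assumes the nvmf target application is already running with a TCP transport created earlier in the run.

# Null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 3,
# exported over NVMe/TCP exactly as the traced rpc_cmd calls do.
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.3 -s 4420

# Teardown mirrors the destroy_subsystems trace earlier in the log:
# ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
# ./scripts/rpc.py bdev_null_delete bdev_null0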
00:45:20.973 fio-3.35 00:45:20.973 Starting 3 threads 00:45:33.210 00:45:33.210 filename0: (groupid=0, jobs=1): err= 0: pid=83834: Wed Nov 20 14:06:28 2024 00:45:33.210 read: IOPS=280, BW=35.0MiB/s (36.7MB/s)(350MiB/10001msec) 00:45:33.210 slat (nsec): min=6181, max=83686, avg=25395.48, stdev=14573.79 00:45:33.210 clat (usec): min=7092, max=12593, avg=10643.80, stdev=477.62 00:45:33.210 lat (usec): min=7105, max=12631, avg=10669.20, stdev=478.82 00:45:33.210 clat percentiles (usec): 00:45:33.210 | 1.00th=[10028], 5.00th=[10159], 10.00th=[10159], 20.00th=[10159], 00:45:33.210 | 30.00th=[10290], 40.00th=[10290], 50.00th=[10421], 60.00th=[10814], 00:45:33.210 | 70.00th=[10945], 80.00th=[11076], 90.00th=[11338], 95.00th=[11469], 00:45:33.210 | 99.00th=[11600], 99.50th=[11731], 99.90th=[12518], 99.95th=[12518], 00:45:33.210 | 99.99th=[12649] 00:45:33.210 bw ( KiB/s): min=33792, max=37632, per=33.34%, avg=35890.05, stdev=1397.99, samples=19 00:45:33.210 iops : min= 264, max= 294, avg=280.37, stdev=10.92, samples=19 00:45:33.210 lat (msec) : 10=0.11%, 20=99.89% 00:45:33.210 cpu : usr=97.68%, sys=1.90%, ctx=7, majf=0, minf=0 00:45:33.210 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:33.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:33.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:33.210 issued rwts: total=2802,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:33.210 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:33.210 filename0: (groupid=0, jobs=1): err= 0: pid=83835: Wed Nov 20 14:06:28 2024 00:45:33.210 read: IOPS=280, BW=35.0MiB/s (36.7MB/s)(350MiB/10001msec) 00:45:33.210 slat (nsec): min=6120, max=83617, avg=25105.55, stdev=14167.43 00:45:33.210 clat (usec): min=7098, max=12581, avg=10644.64, stdev=477.49 00:45:33.210 lat (usec): min=7109, max=12617, avg=10669.75, stdev=478.60 00:45:33.210 clat percentiles (usec): 00:45:33.210 | 1.00th=[10028], 5.00th=[10159], 10.00th=[10159], 20.00th=[10159], 00:45:33.210 | 30.00th=[10290], 40.00th=[10290], 50.00th=[10421], 60.00th=[10814], 00:45:33.210 | 70.00th=[10945], 80.00th=[11076], 90.00th=[11338], 95.00th=[11469], 00:45:33.210 | 99.00th=[11600], 99.50th=[11731], 99.90th=[12518], 99.95th=[12518], 00:45:33.210 | 99.99th=[12518] 00:45:33.210 bw ( KiB/s): min=33792, max=37632, per=33.34%, avg=35890.05, stdev=1397.99, samples=19 00:45:33.210 iops : min= 264, max= 294, avg=280.37, stdev=10.92, samples=19 00:45:33.210 lat (msec) : 10=0.11%, 20=99.89% 00:45:33.210 cpu : usr=97.56%, sys=2.00%, ctx=23, majf=0, minf=0 00:45:33.210 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:33.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:33.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:33.210 issued rwts: total=2802,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:33.210 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:33.210 filename0: (groupid=0, jobs=1): err= 0: pid=83836: Wed Nov 20 14:06:28 2024 00:45:33.210 read: IOPS=280, BW=35.1MiB/s (36.8MB/s)(351MiB/10003msec) 00:45:33.210 slat (nsec): min=6335, max=60207, avg=14849.62, stdev=7509.40 00:45:33.210 clat (usec): min=4056, max=12224, avg=10649.11, stdev=556.61 00:45:33.210 lat (usec): min=4078, max=12257, avg=10663.96, stdev=556.61 00:45:33.210 clat percentiles (usec): 00:45:33.210 | 1.00th=[10028], 5.00th=[10159], 10.00th=[10159], 20.00th=[10290], 00:45:33.210 | 30.00th=[10290], 
40.00th=[10290], 50.00th=[10421], 60.00th=[10814], 00:45:33.210 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11338], 95.00th=[11469], 00:45:33.210 | 99.00th=[11600], 99.50th=[11731], 99.90th=[12256], 99.95th=[12256], 00:45:33.210 | 99.99th=[12256] 00:45:33.210 bw ( KiB/s): min=33792, max=37632, per=33.42%, avg=35974.74, stdev=1454.10, samples=19 00:45:33.210 iops : min= 264, max= 294, avg=281.05, stdev=11.36, samples=19 00:45:33.210 lat (msec) : 10=0.64%, 20=99.36% 00:45:33.210 cpu : usr=90.44%, sys=8.96%, ctx=111, majf=0, minf=0 00:45:33.210 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:33.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:33.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:33.210 issued rwts: total=2808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:33.210 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:33.210 00:45:33.210 Run status group 0 (all jobs): 00:45:33.210 READ: bw=105MiB/s (110MB/s), 35.0MiB/s-35.1MiB/s (36.7MB/s-36.8MB/s), io=1052MiB (1103MB), run=10001-10003msec 00:45:33.210 14:06:29 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:45:33.210 14:06:29 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:45:33.210 14:06:29 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:45:33.210 14:06:29 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:33.210 14:06:29 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:45:33.211 14:06:29 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:33.211 14:06:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:33.211 14:06:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:33.211 14:06:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:33.211 14:06:29 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:33.211 14:06:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:33.211 14:06:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:33.211 14:06:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:33.211 00:45:33.211 real 0m11.110s 00:45:33.211 user 0m29.319s 00:45:33.211 sys 0m1.636s 00:45:33.211 14:06:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:33.211 14:06:29 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:33.211 ************************************ 00:45:33.211 END TEST fio_dif_digest 00:45:33.211 ************************************ 00:45:33.211 14:06:29 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:45:33.211 14:06:29 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:45:33.211 14:06:29 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:45:33.211 14:06:29 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:45:33.211 14:06:29 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:33.211 14:06:29 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:45:33.211 14:06:29 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:33.211 14:06:29 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:33.211 rmmod nvme_tcp 00:45:33.211 rmmod nvme_fabrics 00:45:33.211 rmmod nvme_keyring 00:45:33.211 14:06:29 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:45:33.211 14:06:29 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:45:33.211 14:06:29 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:45:33.211 14:06:29 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 83075 ']' 00:45:33.211 14:06:29 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 83075 00:45:33.211 14:06:29 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 83075 ']' 00:45:33.211 14:06:29 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 83075 00:45:33.211 14:06:29 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:45:33.211 14:06:29 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:33.211 14:06:29 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83075 00:45:33.211 14:06:29 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:33.211 14:06:29 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:33.211 killing process with pid 83075 00:45:33.211 14:06:29 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83075' 00:45:33.211 14:06:29 nvmf_dif -- common/autotest_common.sh@973 -- # kill 83075 00:45:33.211 14:06:29 nvmf_dif -- common/autotest_common.sh@978 -- # wait 83075 00:45:33.211 14:06:29 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:45:33.211 14:06:29 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:45:33.211 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:45:33.211 Waiting for block devices as requested 00:45:33.211 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:45:33.211 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:45:33.211 14:06:30 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:45:33.211 14:06:30 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:45:33.211 14:06:30 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:45:33.211 14:06:30 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:45:33.211 14:06:30 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:45:33.211 14:06:30 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:45:33.211 14:06:30 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:33.211 14:06:30 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:45:33.211 14:06:30 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:45:33.211 14:06:30 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:45:33.211 14:06:30 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:45:33.211 14:06:30 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:45:33.211 14:06:30 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:45:33.211 14:06:30 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:45:33.211 14:06:30 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:45:33.211 14:06:30 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:45:33.211 14:06:30 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:45:33.211 14:06:30 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:45:33.211 14:06:30 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:45:33.211 14:06:30 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:45:33.211 14:06:30 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
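The ip link and ip netns deletions above are nvmf_veth_fini unwinding the virtual test fabric; nvmf_veth_init rebuilds the same topology a few seconds later for the abort_qd_sizes run below. A minimal sketch of that fabric, using only the interface names and addresses that appear in this log (the helper in nvmf/common.sh additionally brings each link up and inserts the iptables ACCEPT rules for port 4420 seen further down):

  # two initiator veths on the host, two target veths inside a namespace,
  # all enslaved to one bridge so 10.0.0.1/2 can reach 10.0.0.3/4
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$br" master nvmf_br
  done

Bridging the host-side peers puts the initiator addresses and the namespaced target addresses on one L2 segment, which is why the four ping checks later in this log all succeed.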
00:45:33.211 14:06:30 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:45:33.211 14:06:30 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:33.211 14:06:30 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:33.211 14:06:30 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:33.211 14:06:30 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:45:33.211 00:45:33.211 real 1m1.454s 00:45:33.211 user 3m55.028s 00:45:33.211 sys 0m15.507s 00:45:33.211 14:06:30 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:33.211 14:06:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:33.211 ************************************ 00:45:33.211 END TEST nvmf_dif 00:45:33.211 ************************************ 00:45:33.470 14:06:30 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:45:33.470 14:06:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:45:33.470 14:06:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:33.470 14:06:30 -- common/autotest_common.sh@10 -- # set +x 00:45:33.470 ************************************ 00:45:33.470 START TEST nvmf_abort_qd_sizes 00:45:33.470 ************************************ 00:45:33.470 14:06:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:45:33.470 * Looking for test storage... 00:45:33.470 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:45:33.470 14:06:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:45:33.470 14:06:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:45:33.470 14:06:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:45:33.730 14:06:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:45:33.730 14:06:30 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:45:33.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:33.731 --rc genhtml_branch_coverage=1 00:45:33.731 --rc genhtml_function_coverage=1 00:45:33.731 --rc genhtml_legend=1 00:45:33.731 --rc geninfo_all_blocks=1 00:45:33.731 --rc geninfo_unexecuted_blocks=1 00:45:33.731 00:45:33.731 ' 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:45:33.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:33.731 --rc genhtml_branch_coverage=1 00:45:33.731 --rc genhtml_function_coverage=1 00:45:33.731 --rc genhtml_legend=1 00:45:33.731 --rc geninfo_all_blocks=1 00:45:33.731 --rc geninfo_unexecuted_blocks=1 00:45:33.731 00:45:33.731 ' 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:45:33.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:33.731 --rc genhtml_branch_coverage=1 00:45:33.731 --rc genhtml_function_coverage=1 00:45:33.731 --rc genhtml_legend=1 00:45:33.731 --rc geninfo_all_blocks=1 00:45:33.731 --rc geninfo_unexecuted_blocks=1 00:45:33.731 00:45:33.731 ' 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:45:33.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:33.731 --rc genhtml_branch_coverage=1 00:45:33.731 --rc genhtml_function_coverage=1 00:45:33.731 --rc genhtml_legend=1 00:45:33.731 --rc geninfo_all_blocks=1 00:45:33.731 --rc geninfo_unexecuted_blocks=1 00:45:33.731 00:45:33.731 ' 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=105ec898-1662-46bd-85be-b241e399edb9 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:45:33.731 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:33.731 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:45:33.732 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:45:33.732 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:45:33.732 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:45:33.732 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:45:33.732 Cannot find device "nvmf_init_br" 00:45:33.732 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:45:33.732 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:45:33.732 Cannot find device "nvmf_init_br2" 00:45:33.732 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:45:33.732 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:45:33.732 Cannot find device "nvmf_tgt_br" 00:45:33.732 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:45:33.732 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:45:33.732 Cannot find device "nvmf_tgt_br2" 00:45:33.732 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:45:33.732 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:45:33.732 Cannot find device "nvmf_init_br" 00:45:33.732 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:45:33.732 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:45:33.732 Cannot find device "nvmf_init_br2" 00:45:33.732 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:45:33.732 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:45:33.732 Cannot find device "nvmf_tgt_br" 00:45:33.732 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:45:33.732 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:45:33.732 Cannot find device "nvmf_tgt_br2" 00:45:33.732 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:45:33.732 14:06:30 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:45:33.732 Cannot find device "nvmf_br" 00:45:33.732 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:45:33.732 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:45:33.732 Cannot find device "nvmf_init_if" 00:45:33.732 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:45:33.732 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:45:33.732 Cannot find device "nvmf_init_if2" 00:45:33.732 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:45:33.732 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:45:33.992 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
00:45:33.992 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:45:33.992 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:45:33.992 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:45:33.992 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:45:33.992 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:45:33.992 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:45:33.992 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:45:33.992 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:45:33.992 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:45:33.992 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:45:33.992 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:45:33.992 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:45:33.992 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:45:33.992 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:45:33.992 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:45:33.992 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:45:33.992 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:45:33.992 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:45:33.992 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:45:33.992 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:45:33.992 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:45:33.992 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:45:33.992 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:45:33.992 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:45:33.992 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:45:33.992 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:45:33.992 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:45:33.992 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:45:33.992 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:45:33.992 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:45:33.992 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:45:33.992 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:45:33.992 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:45:33.992 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:45:33.992 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:45:33.992 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:45:33.992 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:45:33.992 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:45:33.992 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:45:33.992 00:45:33.992 --- 10.0.0.3 ping statistics --- 00:45:33.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:33.992 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:45:33.992 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:45:33.992 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:45:33.992 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:45:33.992 00:45:33.992 --- 10.0.0.4 ping statistics --- 00:45:33.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:33.992 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:45:33.992 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:45:33.992 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:45:33.992 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:45:33.992 00:45:33.992 --- 10.0.0.1 ping statistics --- 00:45:33.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:33.992 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:45:33.992 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:45:33.992 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:45:33.992 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:45:33.992 00:45:33.992 --- 10.0.0.2 ping statistics --- 00:45:33.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:33.992 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:45:33.992 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:33.992 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:45:33.992 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:45:33.992 14:06:31 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:45:34.932 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:45:34.932 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:45:34.932 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:45:35.192 14:06:32 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:35.192 14:06:32 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:45:35.192 14:06:32 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:45:35.192 14:06:32 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:35.192 14:06:32 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:45:35.192 14:06:32 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:45:35.192 14:06:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:45:35.192 14:06:32 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:45:35.192 14:06:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:45:35.192 14:06:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:45:35.192 14:06:32 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=84498 00:45:35.192 14:06:32 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 84498 00:45:35.192 14:06:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 84498 ']' 00:45:35.192 14:06:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:35.192 14:06:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:35.192 14:06:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:35.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:35.192 14:06:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:35.193 14:06:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:45:35.193 14:06:32 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:45:35.193 [2024-11-20 14:06:32.396776] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:45:35.193 [2024-11-20 14:06:32.396839] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:35.452 [2024-11-20 14:06:32.545844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:45:35.452 [2024-11-20 14:06:32.603471] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:35.452 [2024-11-20 14:06:32.603520] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:35.452 [2024-11-20 14:06:32.603527] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:35.452 [2024-11-20 14:06:32.603532] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:35.452 [2024-11-20 14:06:32.603536] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:45:35.452 [2024-11-20 14:06:32.604404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:35.452 [2024-11-20 14:06:32.604701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:35.452 [2024-11-20 14:06:32.604543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:45:35.452 [2024-11-20 14:06:32.604785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:45:35.452 [2024-11-20 14:06:32.647435] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:45:36.022 14:06:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:36.022 14:06:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:45:36.022 14:06:33 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:45:36.022 14:06:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:36.022 14:06:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:45:36.022 14:06:33 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:36.022 14:06:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:45:36.022 14:06:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:45:36.022 14:06:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:45:36.022 14:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:45:36.022 14:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:45:36.022 14:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:45:36.022 14:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:45:36.022 14:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:45:36.022 14:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:45:36.022 14:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:45:36.022 14:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:45:36.022 14:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:45:36.022 14:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:45:36.022 14:06:33 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:45:36.022 14:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:45:36.022 14:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:45:36.022 14:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:45:36.022 14:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:45:36.022 14:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:45:36.022 14:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:45:36.022 14:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:45:36.022 14:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:45:36.022 14:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:45:36.022 14:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:45:36.022 14:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:45:36.022 14:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:45:36.282 14:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:45:36.282 14:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:45:36.282 14:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:45:36.282 14:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:45:36.282 14:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:45:36.282 14:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:45:36.282 14:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:45:36.282 14:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:45:36.282 14:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:45:36.282 14:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:45:36.282 14:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:45:36.282 14:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:45:36.282 14:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:45:36.282 14:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:45:36.282 14:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:45:36.282 14:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:45:36.282 14:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:45:36.282 14:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:45:36.282 14:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:45:36.282 14:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:45:36.282 14:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:45:36.282 14:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:45:36.282 14:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:45:36.282 14:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:45:36.282 14:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:45:36.282 14:06:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
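The printf/lspci/awk trace above is nvme_in_userspace locating NVMe controllers by PCI class code (class 01, subclass 08, prog-if 02) and keeping only those still bound to the kernel nvme driver; both 0000:00:10.0 and 0000:00:11.0 qualify here. A condensed sketch of the same pipeline (the pci_can_use allow/block filtering that also runs above is omitted):

  # list NVMe controllers (PCI class 0108, prog-if 02) bound to the nvme driver
  nvmes=$(lspci -mm -n -D | grep -i -- -p02 \
            | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"')
  for bdf in $nvmes; do
      [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && echo "$bdf"
  done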
00:45:36.282 14:06:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:45:36.282 14:06:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:45:36.282 14:06:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:45:36.282 14:06:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:36.282 14:06:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:45:36.282 ************************************ 00:45:36.282 START TEST spdk_target_abort 00:45:36.282 ************************************ 00:45:36.282 14:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:45:36.282 14:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:45:36.282 14:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:45:36.282 14:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:36.282 14:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:36.282 spdk_targetn1 00:45:36.282 14:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:36.282 14:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:45:36.282 14:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:36.282 14:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:36.282 [2024-11-20 14:06:33.456702] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:36.282 14:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:36.282 14:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:45:36.282 14:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:36.282 14:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:36.282 14:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:36.282 14:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:45:36.282 14:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:36.282 14:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:36.282 14:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:36.283 14:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:45:36.283 14:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:36.283 14:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:36.283 [2024-11-20 14:06:33.507083] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:45:36.283 14:06:33 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:36.283 14:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:45:36.283 14:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:45:36.283 14:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:45:36.283 14:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:45:36.283 14:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:45:36.283 14:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:45:36.283 14:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:45:36.283 14:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:45:36.283 14:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:45:36.283 14:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:36.283 14:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:45:36.283 14:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:36.283 14:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:45:36.283 14:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:36.283 14:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:45:36.283 14:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:36.283 14:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:45:36.283 14:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:36.283 14:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:36.283 14:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:45:36.283 14:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:39.576 Initializing NVMe Controllers 00:45:39.576 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:45:39.576 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:39.576 Initialization complete. Launching workers. 
00:45:39.576 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11089, failed: 0 00:45:39.576 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1042, failed to submit 10047 00:45:39.576 success 728, unsuccessful 314, failed 0 00:45:39.576 14:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:45:39.576 14:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:42.862 Initializing NVMe Controllers 00:45:42.862 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:45:42.862 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:42.862 Initialization complete. Launching workers. 00:45:42.862 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9000, failed: 0 00:45:42.862 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1176, failed to submit 7824 00:45:42.862 success 381, unsuccessful 795, failed 0 00:45:42.862 14:06:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:45:42.862 14:06:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:46.176 Initializing NVMe Controllers 00:45:46.176 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:45:46.176 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:46.176 Initialization complete. Launching workers. 
00:45:46.176 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31834, failed: 0 00:45:46.176 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2362, failed to submit 29472 00:45:46.176 success 460, unsuccessful 1902, failed 0 00:45:46.177 14:06:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:45:46.177 14:06:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:46.177 14:06:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:46.177 14:06:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:46.177 14:06:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:45:46.177 14:06:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:46.177 14:06:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:48.075 14:06:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:48.075 14:06:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84498 00:45:48.075 14:06:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 84498 ']' 00:45:48.075 14:06:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 84498 00:45:48.075 14:06:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:45:48.075 14:06:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:48.075 14:06:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84498 00:45:48.075 14:06:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:48.075 14:06:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:48.075 killing process with pid 84498 00:45:48.075 14:06:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84498' 00:45:48.075 14:06:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 84498 00:45:48.075 14:06:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 84498 00:45:48.075 00:45:48.075 real 0m11.748s 00:45:48.075 user 0m47.739s 00:45:48.075 sys 0m2.017s 00:45:48.075 14:06:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:48.075 14:06:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:48.075 ************************************ 00:45:48.075 END TEST spdk_target_abort 00:45:48.075 ************************************ 00:45:48.075 14:06:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:45:48.075 14:06:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:45:48.075 14:06:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:48.075 14:06:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:45:48.075 ************************************ 00:45:48.075 START TEST kernel_target_abort 00:45:48.075 
************************************ 00:45:48.075 14:06:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:45:48.075 14:06:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:45:48.075 14:06:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:45:48.075 14:06:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:45:48.075 14:06:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:45:48.075 14:06:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:48.075 14:06:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:48.075 14:06:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:45:48.075 14:06:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:48.075 14:06:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:45:48.075 14:06:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:45:48.075 14:06:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:45:48.075 14:06:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:45:48.075 14:06:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:45:48.075 14:06:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:45:48.075 14:06:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:45:48.075 14:06:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:45:48.075 14:06:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:45:48.075 14:06:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:45:48.075 14:06:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:45:48.075 14:06:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:45:48.075 14:06:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:45:48.075 14:06:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:45:48.643 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:45:48.643 Waiting for block devices as requested 00:45:48.643 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:45:48.643 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:45:48.643 14:06:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:45:48.643 14:06:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:45:48.643 14:06:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:45:48.643 14:06:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:45:48.643 14:06:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:45:48.643 14:06:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:45:48.643 14:06:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:45:48.643 14:06:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:45:48.643 14:06:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:45:48.903 No valid GPT data, bailing 00:45:48.903 14:06:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:45:48.903 No valid GPT data, bailing 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:45:48.903 No valid GPT data, bailing 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:45:48.903 No valid GPT data, bailing 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:45:48.903 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:45:49.164 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:45:49.164 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:45:49.164 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:45:49.164 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:45:49.164 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:45:49.164 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:45:49.164 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 --hostid=105ec898-1662-46bd-85be-b241e399edb9 -a 10.0.0.1 -t tcp -s 4420 00:45:49.164 00:45:49.164 Discovery Log Number of Records 2, Generation counter 2 00:45:49.164 =====Discovery Log Entry 0====== 00:45:49.164 trtype: tcp 00:45:49.164 adrfam: ipv4 00:45:49.164 subtype: current discovery subsystem 00:45:49.164 treq: not specified, sq flow control disable supported 00:45:49.164 portid: 1 00:45:49.164 trsvcid: 4420 00:45:49.164 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:45:49.164 traddr: 10.0.0.1 00:45:49.164 eflags: none 00:45:49.164 sectype: none 00:45:49.164 =====Discovery Log Entry 1====== 00:45:49.164 trtype: tcp 00:45:49.164 adrfam: ipv4 00:45:49.164 subtype: nvme subsystem 00:45:49.164 treq: not specified, sq flow control disable supported 00:45:49.164 portid: 1 00:45:49.164 trsvcid: 4420 00:45:49.164 subnqn: nqn.2016-06.io.spdk:testnqn 00:45:49.164 traddr: 10.0.0.1 00:45:49.164 eflags: none 00:45:49.164 sectype: none 00:45:49.164 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:45:49.164 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:45:49.164 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:45:49.164 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:45:49.164 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:45:49.164 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:45:49.164 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:45:49.164 14:06:46 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:45:49.164 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:45:49.164 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:49.164 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:45:49.164 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:49.164 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:45:49.164 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:49.164 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:45:49.164 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:49.164 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:45:49.164 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:49.164 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:49.164 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:45:49.164 14:06:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:52.458 Initializing NVMe Controllers 00:45:52.458 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:45:52.458 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:52.458 Initialization complete. Launching workers. 00:45:52.458 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37700, failed: 0 00:45:52.458 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37700, failed to submit 0 00:45:52.458 success 0, unsuccessful 37700, failed 0 00:45:52.458 14:06:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:45:52.458 14:06:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:55.750 Initializing NVMe Controllers 00:45:55.750 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:45:55.750 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:55.750 Initialization complete. Launching workers. 
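[Note] The kernel_target_abort setup traced above builds the in-kernel nvmet target purely through configfs before pointing the abort example at it. A minimal sketch of the equivalent commands follows; the attribute file names (device_path, enable, addr_*) are assumed from the standard nvmet configfs layout, since the xtrace only records the echoed values, not the redirections:

    # Sketch (assumptions noted): expose a local namespace over NVMe/TCP via the
    # kernel nvmet target, mirroring nvmf/common.sh's kernel-target setup.
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$subsys" "$subsys/namespaces/1" "$port"
    echo 1            > "$subsys/attr_allow_any_host"       # assumed attribute name
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"  # backing device selected above
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"
    # Discovery should then report the two entries shown in the log above:
    nvme discover -t tcp -a 10.0.0.1 -s 4420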
00:45:55.750 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 77522, failed: 0 00:45:55.750 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36245, failed to submit 41277 00:45:55.750 success 0, unsuccessful 36245, failed 0 00:45:55.750 14:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:45:55.750 14:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:59.118 Initializing NVMe Controllers 00:45:59.118 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:45:59.118 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:59.118 Initialization complete. Launching workers. 00:45:59.118 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 98354, failed: 0 00:45:59.118 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24576, failed to submit 73778 00:45:59.118 success 0, unsuccessful 24576, failed 0 00:45:59.118 14:06:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:45:59.118 14:06:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:45:59.118 14:06:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:45:59.118 14:06:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:45:59.118 14:06:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:45:59.118 14:06:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:45:59.118 14:06:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:45:59.118 14:06:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:45:59.118 14:06:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:45:59.118 14:06:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:45:59.688 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:46:11.977 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:46:11.977 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:46:11.977 00:46:11.977 real 0m22.892s 00:46:11.977 user 0m6.876s 00:46:11.977 sys 0m13.762s 00:46:11.977 14:07:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:11.977 14:07:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:11.977 ************************************ 00:46:11.977 END TEST kernel_target_abort 00:46:11.977 ************************************ 00:46:11.977 14:07:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:46:11.977 14:07:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:46:11.977 
14:07:08 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:46:11.977 14:07:08 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:46:11.977 14:07:08 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:46:11.978 14:07:08 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:46:11.978 14:07:08 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:46:11.978 14:07:08 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:46:11.978 rmmod nvme_tcp 00:46:11.978 rmmod nvme_fabrics 00:46:11.978 rmmod nvme_keyring 00:46:11.978 14:07:08 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:46:11.978 14:07:08 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:46:11.978 14:07:08 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:46:11.978 14:07:08 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 84498 ']' 00:46:11.978 14:07:08 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 84498 00:46:11.978 14:07:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 84498 ']' 00:46:11.978 14:07:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 84498 00:46:11.978 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (84498) - No such process 00:46:11.978 Process with pid 84498 is not found 00:46:11.978 14:07:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 84498 is not found' 00:46:11.978 14:07:08 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:46:11.978 14:07:08 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:46:11.978 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:46:11.978 Waiting for block devices as requested 00:46:11.978 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:46:11.978 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:46:11.978 14:07:09 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:46:11.978 14:07:09 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:46:11.978 14:07:09 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:46:11.978 14:07:09 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:46:11.978 14:07:09 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:46:11.978 14:07:09 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:46:11.978 14:07:09 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:46:11.978 14:07:09 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:46:11.978 14:07:09 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:46:11.978 14:07:09 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:46:11.978 14:07:09 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:46:11.978 14:07:09 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:46:11.978 14:07:09 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:46:11.978 14:07:09 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:46:11.978 14:07:09 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:46:11.978 14:07:09 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:46:11.978 14:07:09 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:46:11.978 14:07:09 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:46:11.978 14:07:09 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:46:11.978 14:07:09 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:46:11.978 14:07:09 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:46:12.242 14:07:09 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:46:12.242 14:07:09 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:12.242 14:07:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:46:12.242 14:07:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:12.242 14:07:09 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:46:12.242 00:46:12.242 real 0m38.756s 00:46:12.242 user 0m55.983s 00:46:12.242 sys 0m17.720s 00:46:12.242 14:07:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:12.242 14:07:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:12.242 ************************************ 00:46:12.242 END TEST nvmf_abort_qd_sizes 00:46:12.242 ************************************ 00:46:12.242 14:07:09 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:46:12.242 14:07:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:46:12.242 14:07:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:12.242 14:07:09 -- common/autotest_common.sh@10 -- # set +x 00:46:12.242 ************************************ 00:46:12.242 START TEST keyring_file 00:46:12.242 ************************************ 00:46:12.242 14:07:09 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:46:12.243 * Looking for test storage... 
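[Note] The nvmftestfini sequence that just completed unloads the nvme-tcp fabrics modules, strips only SPDK's firewall rules, and deletes the veth/bridge/netns topology the suite created. Condensed from the trace (the final netns removal happens inside _remove_spdk_ns and is an assumption here):

    # Sketch: the tail end of nvmftestfini, reconstructed from the xtrace above.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: drop only SPDK's rules
    for ifname in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$ifname" nomaster
        ip link set "$ifname" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk    # assumed: what _remove_spdk_ns does last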
00:46:12.243 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:46:12.243 14:07:09 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:46:12.243 14:07:09 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:46:12.243 14:07:09 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:46:12.502 14:07:09 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:46:12.502 14:07:09 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:12.502 14:07:09 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:12.502 14:07:09 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:12.502 14:07:09 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:46:12.502 14:07:09 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:46:12.502 14:07:09 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:46:12.502 14:07:09 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:46:12.502 14:07:09 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:46:12.502 14:07:09 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:46:12.502 14:07:09 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:46:12.502 14:07:09 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:12.502 14:07:09 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:46:12.502 14:07:09 keyring_file -- scripts/common.sh@345 -- # : 1 00:46:12.502 14:07:09 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:12.502 14:07:09 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:46:12.502 14:07:09 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:46:12.502 14:07:09 keyring_file -- scripts/common.sh@353 -- # local d=1 00:46:12.502 14:07:09 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:12.502 14:07:09 keyring_file -- scripts/common.sh@355 -- # echo 1 00:46:12.502 14:07:09 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:46:12.502 14:07:09 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:46:12.502 14:07:09 keyring_file -- scripts/common.sh@353 -- # local d=2 00:46:12.503 14:07:09 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:12.503 14:07:09 keyring_file -- scripts/common.sh@355 -- # echo 2 00:46:12.503 14:07:09 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:46:12.503 14:07:09 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:12.503 14:07:09 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:12.503 14:07:09 keyring_file -- scripts/common.sh@368 -- # return 0 00:46:12.503 14:07:09 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:12.503 14:07:09 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:46:12.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:12.503 --rc genhtml_branch_coverage=1 00:46:12.503 --rc genhtml_function_coverage=1 00:46:12.503 --rc genhtml_legend=1 00:46:12.503 --rc geninfo_all_blocks=1 00:46:12.503 --rc geninfo_unexecuted_blocks=1 00:46:12.503 00:46:12.503 ' 00:46:12.503 14:07:09 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:46:12.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:12.503 --rc genhtml_branch_coverage=1 00:46:12.503 --rc genhtml_function_coverage=1 00:46:12.503 --rc genhtml_legend=1 00:46:12.503 --rc geninfo_all_blocks=1 00:46:12.503 --rc 
geninfo_unexecuted_blocks=1 00:46:12.503 00:46:12.503 ' 00:46:12.503 14:07:09 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:46:12.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:12.503 --rc genhtml_branch_coverage=1 00:46:12.503 --rc genhtml_function_coverage=1 00:46:12.503 --rc genhtml_legend=1 00:46:12.503 --rc geninfo_all_blocks=1 00:46:12.503 --rc geninfo_unexecuted_blocks=1 00:46:12.503 00:46:12.503 ' 00:46:12.503 14:07:09 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:46:12.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:12.503 --rc genhtml_branch_coverage=1 00:46:12.503 --rc genhtml_function_coverage=1 00:46:12.503 --rc genhtml_legend=1 00:46:12.503 --rc geninfo_all_blocks=1 00:46:12.503 --rc geninfo_unexecuted_blocks=1 00:46:12.503 00:46:12.503 ' 00:46:12.503 14:07:09 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:46:12.503 14:07:09 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:46:12.503 14:07:09 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:46:12.503 14:07:09 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:12.503 14:07:09 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:12.503 14:07:09 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:12.503 14:07:09 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:12.503 14:07:09 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:12.503 14:07:09 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:12.503 14:07:09 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:12.503 14:07:09 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:12.503 14:07:09 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:12.503 14:07:09 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:12.503 14:07:09 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:46:12.503 14:07:09 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=105ec898-1662-46bd-85be-b241e399edb9 00:46:12.503 14:07:09 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:12.503 14:07:09 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:12.503 14:07:09 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:46:12.503 14:07:09 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:12.503 14:07:09 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:46:12.503 14:07:09 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:46:12.503 14:07:09 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:12.503 14:07:09 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:12.503 14:07:09 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:12.503 14:07:09 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:12.503 14:07:09 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:12.503 14:07:09 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:12.503 14:07:09 keyring_file -- paths/export.sh@5 -- # export PATH 00:46:12.503 14:07:09 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:12.503 14:07:09 keyring_file -- nvmf/common.sh@51 -- # : 0 00:46:12.503 14:07:09 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:12.503 14:07:09 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:12.503 14:07:09 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:12.503 14:07:09 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:12.503 14:07:09 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:12.503 14:07:09 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:46:12.503 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:46:12.503 14:07:09 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:12.503 14:07:09 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:12.503 14:07:09 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:12.503 14:07:09 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:46:12.503 14:07:09 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:46:12.503 14:07:09 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:46:12.503 14:07:09 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:46:12.503 14:07:09 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:46:12.503 14:07:09 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:46:12.503 14:07:09 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:46:12.503 14:07:09 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:46:12.503 14:07:09 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:46:12.503 14:07:09 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:46:12.503 14:07:09 keyring_file -- keyring/common.sh@17 -- # digest=0 00:46:12.503 14:07:09 keyring_file -- keyring/common.sh@18 -- # mktemp 00:46:12.503 14:07:09 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.R0PT98d2zG 00:46:12.503 14:07:09 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:46:12.503 14:07:09 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:46:12.503 14:07:09 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:46:12.503 14:07:09 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:46:12.503 14:07:09 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:46:12.503 14:07:09 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:46:12.503 14:07:09 keyring_file -- nvmf/common.sh@733 -- # python - 00:46:12.503 14:07:09 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.R0PT98d2zG 00:46:12.503 14:07:09 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.R0PT98d2zG 00:46:12.503 14:07:09 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.R0PT98d2zG 00:46:12.503 14:07:09 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:46:12.503 14:07:09 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:46:12.503 14:07:09 keyring_file -- keyring/common.sh@17 -- # name=key1 00:46:12.503 14:07:09 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:46:12.503 14:07:09 keyring_file -- keyring/common.sh@17 -- # digest=0 00:46:12.503 14:07:09 keyring_file -- keyring/common.sh@18 -- # mktemp 00:46:12.503 14:07:09 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.uk5tjnTQlJ 00:46:12.503 14:07:09 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:46:12.503 14:07:09 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:46:12.503 14:07:09 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:46:12.503 14:07:09 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:46:12.503 14:07:09 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:46:12.503 14:07:09 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:46:12.503 14:07:09 keyring_file -- nvmf/common.sh@733 -- # python - 00:46:12.503 14:07:09 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.uk5tjnTQlJ 00:46:12.503 14:07:09 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.uk5tjnTQlJ 00:46:12.503 14:07:09 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.uk5tjnTQlJ 00:46:12.503 14:07:09 keyring_file -- keyring/file.sh@30 -- # tgtpid=85546 00:46:12.503 14:07:09 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85546 00:46:12.503 14:07:09 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85546 ']' 00:46:12.503 14:07:09 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:12.503 14:07:09 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:12.503 14:07:09 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
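[Note] prep_key, traced just above for key0 and key1, turns a raw hex PSK into a key file the bperf keyring can load. A bash sketch of that flow, with the interchange encoding left to the helper rather than guessed at:

    # Sketch: how /tmp/tmp.R0PT98d2zG and /tmp/tmp.uk5tjnTQlJ are produced.
    # Assumes test/nvmf/common.sh is sourced (it defines format_interchange_psk).
    key_hex=00112233445566778899aabbccddeeff
    digest=0                        # 0 = plain PSK, no hash
    path=$(mktemp)                  # random /tmp/tmp.XXXXXXXXXX file, as in the trace
    # format_interchange_psk wraps the hex key into the "NVMeTLSkey-1:..." interchange
    # string via the python one-liner in nvmf/common.sh; its exact byte layout is not
    # reproduced here.
    format_interchange_psk "$key_hex" "$digest" > "$path"
    chmod 0600 "$path"              # the 0660 negative test later shows looser modes are rejected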
00:46:12.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:12.504 14:07:09 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:12.504 14:07:09 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:46:12.504 14:07:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:12.763 [2024-11-20 14:07:09.879454] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:46:12.763 [2024-11-20 14:07:09.879551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85546 ] 00:46:12.763 [2024-11-20 14:07:10.016917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:12.763 [2024-11-20 14:07:10.075821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:13.023 [2024-11-20 14:07:10.152058] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:46:13.283 14:07:10 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:13.283 14:07:10 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:46:13.283 14:07:10 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:46:13.283 14:07:10 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:13.283 14:07:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:13.283 [2024-11-20 14:07:10.403601] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:13.283 null0 00:46:13.283 [2024-11-20 14:07:10.435537] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:46:13.283 [2024-11-20 14:07:10.435784] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:46:13.283 14:07:10 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:13.283 14:07:10 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:46:13.283 14:07:10 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:46:13.283 14:07:10 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:46:13.283 14:07:10 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:46:13.283 14:07:10 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:13.283 14:07:10 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:46:13.283 14:07:10 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:13.283 14:07:10 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:46:13.283 14:07:10 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:13.283 14:07:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:13.283 [2024-11-20 14:07:10.467462] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:46:13.283 request: 00:46:13.283 { 00:46:13.283 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:46:13.283 "secure_channel": false, 00:46:13.283 "listen_address": { 00:46:13.283 "trtype": "tcp", 00:46:13.283 "traddr": "127.0.0.1", 00:46:13.283 "trsvcid": "4420" 00:46:13.283 }, 00:46:13.283 
"method": "nvmf_subsystem_add_listener", 00:46:13.283 "req_id": 1 00:46:13.283 } 00:46:13.283 Got JSON-RPC error response 00:46:13.283 response: 00:46:13.283 { 00:46:13.283 "code": -32602, 00:46:13.283 "message": "Invalid parameters" 00:46:13.283 } 00:46:13.283 14:07:10 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:46:13.283 14:07:10 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:46:13.283 14:07:10 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:46:13.283 14:07:10 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:46:13.283 14:07:10 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:46:13.283 14:07:10 keyring_file -- keyring/file.sh@47 -- # bperfpid=85562 00:46:13.283 14:07:10 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:46:13.283 14:07:10 keyring_file -- keyring/file.sh@49 -- # waitforlisten 85562 /var/tmp/bperf.sock 00:46:13.283 14:07:10 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85562 ']' 00:46:13.283 14:07:10 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:46:13.283 14:07:10 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:13.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:46:13.283 14:07:10 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:46:13.283 14:07:10 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:13.283 14:07:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:13.283 [2024-11-20 14:07:10.528205] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
00:46:13.283 [2024-11-20 14:07:10.528274] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85562 ] 00:46:13.543 [2024-11-20 14:07:10.675490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:13.543 [2024-11-20 14:07:10.736093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:46:13.543 [2024-11-20 14:07:10.808432] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:46:14.113 14:07:11 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:14.113 14:07:11 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:46:14.113 14:07:11 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.R0PT98d2zG 00:46:14.113 14:07:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.R0PT98d2zG 00:46:14.374 14:07:11 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.uk5tjnTQlJ 00:46:14.374 14:07:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.uk5tjnTQlJ 00:46:14.633 14:07:11 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:46:14.633 14:07:11 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:46:14.633 14:07:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:14.633 14:07:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:14.633 14:07:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:14.892 14:07:11 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.R0PT98d2zG == \/\t\m\p\/\t\m\p\.\R\0\P\T\9\8\d\2\z\G ]] 00:46:14.892 14:07:11 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:46:14.892 14:07:11 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:46:14.892 14:07:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:14.892 14:07:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:14.892 14:07:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:14.892 14:07:12 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.uk5tjnTQlJ == \/\t\m\p\/\t\m\p\.\u\k\5\t\j\n\T\Q\l\J ]] 00:46:14.892 14:07:12 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:46:14.892 14:07:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:14.892 14:07:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:14.892 14:07:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:14.892 14:07:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:14.892 14:07:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:15.150 14:07:12 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:46:15.150 14:07:12 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:46:15.150 14:07:12 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:15.150 14:07:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:15.150 14:07:12 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:15.150 14:07:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:15.150 14:07:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:15.409 14:07:12 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:46:15.409 14:07:12 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:15.409 14:07:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:15.668 [2024-11-20 14:07:12.851186] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:15.668 nvme0n1 00:46:15.668 14:07:12 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:46:15.668 14:07:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:15.668 14:07:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:15.668 14:07:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:15.668 14:07:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:15.668 14:07:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:15.928 14:07:13 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:46:15.928 14:07:13 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:46:15.928 14:07:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:15.928 14:07:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:15.928 14:07:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:15.928 14:07:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:15.928 14:07:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:16.187 14:07:13 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:46:16.187 14:07:13 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:46:16.446 Running I/O for 1 seconds... 
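[Note] The happy-path sequence just traced condenses to: register both key files with bdevperf's keyring, attach a TLS-protected controller using key0, confirm the key refcount rose to 2 (keyring plus live controller), and run the preconfigured randrw workload via bdevperf's helper script. Using the same paths and NQNs as above:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock
    "$rpc" -s "$sock" keyring_file_add_key key0 /tmp/tmp.R0PT98d2zG
    "$rpc" -s "$sock" keyring_file_add_key key1 /tmp/tmp.uk5tjnTQlJ
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
    # key0 now shows refcnt 2: held by the keyring and by the attached controller
    "$rpc" -s "$sock" keyring_get_keys | jq '.[] | select(.name == "key0") | .refcnt'
    # kick off the 1-second randrw run defined on bdevperf's command line
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests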
00:46:17.384 16658.00 IOPS, 65.07 MiB/s 00:46:17.384 Latency(us) 00:46:17.384 [2024-11-20T14:07:14.707Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:17.384 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:46:17.384 nvme0n1 : 1.00 16715.39 65.29 0.00 0.00 7643.87 3219.56 11962.47 00:46:17.384 [2024-11-20T14:07:14.707Z] =================================================================================================================== 00:46:17.384 [2024-11-20T14:07:14.707Z] Total : 16715.39 65.29 0.00 0.00 7643.87 3219.56 11962.47 00:46:17.384 { 00:46:17.384 "results": [ 00:46:17.384 { 00:46:17.384 "job": "nvme0n1", 00:46:17.384 "core_mask": "0x2", 00:46:17.384 "workload": "randrw", 00:46:17.384 "percentage": 50, 00:46:17.384 "status": "finished", 00:46:17.384 "queue_depth": 128, 00:46:17.384 "io_size": 4096, 00:46:17.384 "runtime": 1.004224, 00:46:17.384 "iops": 16715.39417500478, 00:46:17.384 "mibps": 65.29450849611243, 00:46:17.384 "io_failed": 0, 00:46:17.384 "io_timeout": 0, 00:46:17.384 "avg_latency_us": 7643.86526388959, 00:46:17.384 "min_latency_us": 3219.5633187772924, 00:46:17.384 "max_latency_us": 11962.466375545851 00:46:17.384 } 00:46:17.384 ], 00:46:17.384 "core_count": 1 00:46:17.384 } 00:46:17.384 14:07:14 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:46:17.384 14:07:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:46:17.643 14:07:14 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:46:17.643 14:07:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:17.643 14:07:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:17.643 14:07:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:17.643 14:07:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:17.643 14:07:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:17.903 14:07:14 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:46:17.903 14:07:14 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:46:17.903 14:07:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:17.903 14:07:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:17.903 14:07:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:17.903 14:07:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:17.903 14:07:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:17.903 14:07:15 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:46:17.903 14:07:15 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:46:17.903 14:07:15 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:46:17.903 14:07:15 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:46:17.903 14:07:15 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:46:17.903 14:07:15 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:17.903 14:07:15 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:46:17.903 14:07:15 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:17.903 14:07:15 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:46:17.903 14:07:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:46:18.162 [2024-11-20 14:07:15.429235] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:46:18.162 [2024-11-20 14:07:15.429948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1615c60 (107): Transport endpoint is not connected 00:46:18.162 [2024-11-20 14:07:15.430937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1615c60 (9): Bad file descriptor 00:46:18.162 [2024-11-20 14:07:15.431934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:46:18.162 [2024-11-20 14:07:15.432003] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:46:18.162 [2024-11-20 14:07:15.432011] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:46:18.162 [2024-11-20 14:07:15.432019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:46:18.162 request: 00:46:18.162 { 00:46:18.162 "name": "nvme0", 00:46:18.162 "trtype": "tcp", 00:46:18.162 "traddr": "127.0.0.1", 00:46:18.162 "adrfam": "ipv4", 00:46:18.162 "trsvcid": "4420", 00:46:18.162 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:18.162 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:18.162 "prchk_reftag": false, 00:46:18.162 "prchk_guard": false, 00:46:18.162 "hdgst": false, 00:46:18.162 "ddgst": false, 00:46:18.162 "psk": "key1", 00:46:18.162 "allow_unrecognized_csi": false, 00:46:18.162 "method": "bdev_nvme_attach_controller", 00:46:18.162 "req_id": 1 00:46:18.162 } 00:46:18.162 Got JSON-RPC error response 00:46:18.162 response: 00:46:18.162 { 00:46:18.162 "code": -5, 00:46:18.162 "message": "Input/output error" 00:46:18.162 } 00:46:18.162 14:07:15 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:46:18.162 14:07:15 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:46:18.162 14:07:15 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:46:18.162 14:07:15 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:46:18.162 14:07:15 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:46:18.162 14:07:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:18.162 14:07:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:18.162 14:07:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:18.163 14:07:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:18.163 14:07:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:18.422 14:07:15 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:46:18.422 14:07:15 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:46:18.422 14:07:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:18.422 14:07:15 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:18.422 14:07:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:18.422 14:07:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:18.422 14:07:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:18.705 14:07:15 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:46:18.705 14:07:15 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:46:18.705 14:07:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:46:18.964 14:07:16 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:46:18.964 14:07:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:46:19.240 14:07:16 keyring_file -- keyring/file.sh@78 -- # jq length 00:46:19.240 14:07:16 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:46:19.240 14:07:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:19.240 14:07:16 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:46:19.240 14:07:16 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.R0PT98d2zG 00:46:19.240 14:07:16 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.R0PT98d2zG 00:46:19.240 14:07:16 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 00:46:19.240 14:07:16 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.R0PT98d2zG 00:46:19.240 14:07:16 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:46:19.240 14:07:16 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:19.240 14:07:16 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:46:19.240 14:07:16 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:19.240 14:07:16 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.R0PT98d2zG 00:46:19.240 14:07:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.R0PT98d2zG 00:46:19.521 [2024-11-20 14:07:16.739850] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.R0PT98d2zG': 0100660 00:46:19.521 [2024-11-20 14:07:16.739895] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:46:19.521 request: 00:46:19.521 { 00:46:19.521 "name": "key0", 00:46:19.521 "path": "/tmp/tmp.R0PT98d2zG", 00:46:19.521 "method": "keyring_file_add_key", 00:46:19.521 "req_id": 1 00:46:19.521 } 00:46:19.521 Got JSON-RPC error response 00:46:19.521 response: 00:46:19.521 { 00:46:19.521 "code": -1, 00:46:19.521 "message": "Operation not permitted" 00:46:19.521 } 00:46:19.521 14:07:16 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:46:19.521 14:07:16 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:46:19.521 14:07:16 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:46:19.521 14:07:16 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:46:19.521 14:07:16 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.R0PT98d2zG 00:46:19.521 14:07:16 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.R0PT98d2zG 00:46:19.521 14:07:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.R0PT98d2zG 00:46:19.780 14:07:16 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.R0PT98d2zG 00:46:19.780 14:07:16 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:46:19.780 14:07:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:19.780 14:07:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:19.780 14:07:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:19.780 14:07:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:19.780 14:07:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:20.041 14:07:17 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:46:20.041 14:07:17 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:20.041 14:07:17 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:46:20.041 14:07:17 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:20.041 14:07:17 
keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:46:20.041 14:07:17 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:20.041 14:07:17 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:46:20.041 14:07:17 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:20.041 14:07:17 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:20.041 14:07:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:20.300 [2024-11-20 14:07:17.394753] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.R0PT98d2zG': No such file or directory 00:46:20.300 [2024-11-20 14:07:17.394869] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:46:20.300 [2024-11-20 14:07:17.394912] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:46:20.300 [2024-11-20 14:07:17.394929] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:46:20.300 [2024-11-20 14:07:17.394956] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:46:20.300 [2024-11-20 14:07:17.394972] bdev_nvme.c:6764:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:46:20.300 request: 00:46:20.300 { 00:46:20.300 "name": "nvme0", 00:46:20.300 "trtype": "tcp", 00:46:20.300 "traddr": "127.0.0.1", 00:46:20.300 "adrfam": "ipv4", 00:46:20.300 "trsvcid": "4420", 00:46:20.300 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:20.300 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:20.300 "prchk_reftag": false, 00:46:20.300 "prchk_guard": false, 00:46:20.300 "hdgst": false, 00:46:20.300 "ddgst": false, 00:46:20.300 "psk": "key0", 00:46:20.300 "allow_unrecognized_csi": false, 00:46:20.300 "method": "bdev_nvme_attach_controller", 00:46:20.300 "req_id": 1 00:46:20.300 } 00:46:20.300 Got JSON-RPC error response 00:46:20.300 response: 00:46:20.300 { 00:46:20.300 "code": -19, 00:46:20.300 "message": "No such device" 00:46:20.300 } 00:46:20.300 14:07:17 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:46:20.300 14:07:17 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:46:20.300 14:07:17 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:46:20.300 14:07:17 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:46:20.300 14:07:17 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:46:20.300 14:07:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:46:20.560 14:07:17 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:46:20.560 14:07:17 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:46:20.560 14:07:17 keyring_file -- keyring/common.sh@17 -- # name=key0 00:46:20.560 14:07:17 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:46:20.560 
14:07:17 keyring_file -- keyring/common.sh@17 -- # digest=0 00:46:20.560 14:07:17 keyring_file -- keyring/common.sh@18 -- # mktemp 00:46:20.560 14:07:17 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.kkzlYp6QaX 00:46:20.560 14:07:17 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:46:20.560 14:07:17 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:46:20.560 14:07:17 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:46:20.560 14:07:17 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:46:20.560 14:07:17 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:46:20.560 14:07:17 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:46:20.560 14:07:17 keyring_file -- nvmf/common.sh@733 -- # python - 00:46:20.560 14:07:17 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.kkzlYp6QaX 00:46:20.560 14:07:17 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.kkzlYp6QaX 00:46:20.560 14:07:17 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.kkzlYp6QaX 00:46:20.560 14:07:17 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.kkzlYp6QaX 00:46:20.560 14:07:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.kkzlYp6QaX 00:46:20.560 14:07:17 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:20.560 14:07:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:21.128 nvme0n1 00:46:21.128 14:07:18 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:46:21.128 14:07:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:21.128 14:07:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:21.128 14:07:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:21.128 14:07:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:21.128 14:07:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:21.128 14:07:18 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:46:21.128 14:07:18 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:46:21.128 14:07:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:46:21.387 14:07:18 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:46:21.388 14:07:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:21.388 14:07:18 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:46:21.388 14:07:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:21.388 14:07:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:21.647 14:07:18 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:46:21.647 14:07:18 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:46:21.647 14:07:18 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:46:21.647 14:07:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:21.647 14:07:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:21.647 14:07:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:21.647 14:07:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:21.907 14:07:19 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:46:21.907 14:07:19 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:46:21.907 14:07:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:46:22.166 14:07:19 keyring_file -- keyring/file.sh@105 -- # jq length 00:46:22.166 14:07:19 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:46:22.166 14:07:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:22.425 14:07:19 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:46:22.425 14:07:19 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.kkzlYp6QaX 00:46:22.425 14:07:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.kkzlYp6QaX 00:46:22.685 14:07:19 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.uk5tjnTQlJ 00:46:22.685 14:07:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.uk5tjnTQlJ 00:46:22.944 14:07:20 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:22.944 14:07:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:23.202 nvme0n1 00:46:23.202 14:07:20 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:46:23.202 14:07:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:46:23.460 14:07:20 keyring_file -- keyring/file.sh@113 -- # config='{ 00:46:23.460 "subsystems": [ 00:46:23.460 { 00:46:23.460 "subsystem": "keyring", 00:46:23.460 "config": [ 00:46:23.460 { 00:46:23.460 "method": "keyring_file_add_key", 00:46:23.460 "params": { 00:46:23.460 "name": "key0", 00:46:23.460 "path": "/tmp/tmp.kkzlYp6QaX" 00:46:23.460 } 00:46:23.460 }, 00:46:23.460 { 00:46:23.460 "method": "keyring_file_add_key", 00:46:23.460 "params": { 00:46:23.460 "name": "key1", 00:46:23.460 "path": "/tmp/tmp.uk5tjnTQlJ" 00:46:23.460 } 00:46:23.460 } 00:46:23.460 ] 00:46:23.460 }, 00:46:23.460 { 00:46:23.460 "subsystem": "iobuf", 00:46:23.460 "config": [ 00:46:23.460 { 00:46:23.460 "method": "iobuf_set_options", 00:46:23.460 "params": { 00:46:23.460 "small_pool_count": 8192, 00:46:23.460 "large_pool_count": 1024, 00:46:23.460 "small_bufsize": 8192, 00:46:23.460 "large_bufsize": 135168, 00:46:23.460 "enable_numa": false 00:46:23.460 } 00:46:23.460 } 00:46:23.460 ] 00:46:23.460 }, 00:46:23.460 { 00:46:23.460 "subsystem": 
"sock", 00:46:23.460 "config": [ 00:46:23.460 { 00:46:23.460 "method": "sock_set_default_impl", 00:46:23.460 "params": { 00:46:23.460 "impl_name": "uring" 00:46:23.460 } 00:46:23.460 }, 00:46:23.460 { 00:46:23.460 "method": "sock_impl_set_options", 00:46:23.460 "params": { 00:46:23.460 "impl_name": "ssl", 00:46:23.460 "recv_buf_size": 4096, 00:46:23.460 "send_buf_size": 4096, 00:46:23.460 "enable_recv_pipe": true, 00:46:23.460 "enable_quickack": false, 00:46:23.460 "enable_placement_id": 0, 00:46:23.460 "enable_zerocopy_send_server": true, 00:46:23.460 "enable_zerocopy_send_client": false, 00:46:23.460 "zerocopy_threshold": 0, 00:46:23.460 "tls_version": 0, 00:46:23.460 "enable_ktls": false 00:46:23.460 } 00:46:23.460 }, 00:46:23.460 { 00:46:23.460 "method": "sock_impl_set_options", 00:46:23.460 "params": { 00:46:23.460 "impl_name": "posix", 00:46:23.460 "recv_buf_size": 2097152, 00:46:23.460 "send_buf_size": 2097152, 00:46:23.460 "enable_recv_pipe": true, 00:46:23.460 "enable_quickack": false, 00:46:23.460 "enable_placement_id": 0, 00:46:23.460 "enable_zerocopy_send_server": true, 00:46:23.460 "enable_zerocopy_send_client": false, 00:46:23.460 "zerocopy_threshold": 0, 00:46:23.460 "tls_version": 0, 00:46:23.460 "enable_ktls": false 00:46:23.460 } 00:46:23.460 }, 00:46:23.460 { 00:46:23.460 "method": "sock_impl_set_options", 00:46:23.460 "params": { 00:46:23.460 "impl_name": "uring", 00:46:23.460 "recv_buf_size": 2097152, 00:46:23.460 "send_buf_size": 2097152, 00:46:23.461 "enable_recv_pipe": true, 00:46:23.461 "enable_quickack": false, 00:46:23.461 "enable_placement_id": 0, 00:46:23.461 "enable_zerocopy_send_server": false, 00:46:23.461 "enable_zerocopy_send_client": false, 00:46:23.461 "zerocopy_threshold": 0, 00:46:23.461 "tls_version": 0, 00:46:23.461 "enable_ktls": false 00:46:23.461 } 00:46:23.461 } 00:46:23.461 ] 00:46:23.461 }, 00:46:23.461 { 00:46:23.461 "subsystem": "vmd", 00:46:23.461 "config": [] 00:46:23.461 }, 00:46:23.461 { 00:46:23.461 "subsystem": "accel", 00:46:23.461 "config": [ 00:46:23.461 { 00:46:23.461 "method": "accel_set_options", 00:46:23.461 "params": { 00:46:23.461 "small_cache_size": 128, 00:46:23.461 "large_cache_size": 16, 00:46:23.461 "task_count": 2048, 00:46:23.461 "sequence_count": 2048, 00:46:23.461 "buf_count": 2048 00:46:23.461 } 00:46:23.461 } 00:46:23.461 ] 00:46:23.461 }, 00:46:23.461 { 00:46:23.461 "subsystem": "bdev", 00:46:23.461 "config": [ 00:46:23.461 { 00:46:23.461 "method": "bdev_set_options", 00:46:23.461 "params": { 00:46:23.461 "bdev_io_pool_size": 65535, 00:46:23.461 "bdev_io_cache_size": 256, 00:46:23.461 "bdev_auto_examine": true, 00:46:23.461 "iobuf_small_cache_size": 128, 00:46:23.461 "iobuf_large_cache_size": 16 00:46:23.461 } 00:46:23.461 }, 00:46:23.461 { 00:46:23.461 "method": "bdev_raid_set_options", 00:46:23.461 "params": { 00:46:23.461 "process_window_size_kb": 1024, 00:46:23.461 "process_max_bandwidth_mb_sec": 0 00:46:23.461 } 00:46:23.461 }, 00:46:23.461 { 00:46:23.461 "method": "bdev_iscsi_set_options", 00:46:23.461 "params": { 00:46:23.461 "timeout_sec": 30 00:46:23.461 } 00:46:23.461 }, 00:46:23.461 { 00:46:23.461 "method": "bdev_nvme_set_options", 00:46:23.461 "params": { 00:46:23.461 "action_on_timeout": "none", 00:46:23.461 "timeout_us": 0, 00:46:23.461 "timeout_admin_us": 0, 00:46:23.461 "keep_alive_timeout_ms": 10000, 00:46:23.461 "arbitration_burst": 0, 00:46:23.461 "low_priority_weight": 0, 00:46:23.461 "medium_priority_weight": 0, 00:46:23.461 "high_priority_weight": 0, 00:46:23.461 "nvme_adminq_poll_period_us": 
10000, 00:46:23.461 "nvme_ioq_poll_period_us": 0, 00:46:23.461 "io_queue_requests": 512, 00:46:23.461 "delay_cmd_submit": true, 00:46:23.461 "transport_retry_count": 4, 00:46:23.461 "bdev_retry_count": 3, 00:46:23.461 "transport_ack_timeout": 0, 00:46:23.461 "ctrlr_loss_timeout_sec": 0, 00:46:23.461 "reconnect_delay_sec": 0, 00:46:23.461 "fast_io_fail_timeout_sec": 0, 00:46:23.461 "disable_auto_failback": false, 00:46:23.461 "generate_uuids": false, 00:46:23.461 "transport_tos": 0, 00:46:23.461 "nvme_error_stat": false, 00:46:23.461 "rdma_srq_size": 0, 00:46:23.461 "io_path_stat": false, 00:46:23.461 "allow_accel_sequence": false, 00:46:23.461 "rdma_max_cq_size": 0, 00:46:23.461 "rdma_cm_event_timeout_ms": 0, 00:46:23.461 "dhchap_digests": [ 00:46:23.461 "sha256", 00:46:23.461 "sha384", 00:46:23.461 "sha512" 00:46:23.461 ], 00:46:23.461 "dhchap_dhgroups": [ 00:46:23.461 "null", 00:46:23.461 "ffdhe2048", 00:46:23.461 "ffdhe3072", 00:46:23.461 "ffdhe4096", 00:46:23.461 "ffdhe6144", 00:46:23.461 "ffdhe8192" 00:46:23.461 ] 00:46:23.461 } 00:46:23.461 }, 00:46:23.461 { 00:46:23.461 "method": "bdev_nvme_attach_controller", 00:46:23.461 "params": { 00:46:23.461 "name": "nvme0", 00:46:23.461 "trtype": "TCP", 00:46:23.461 "adrfam": "IPv4", 00:46:23.461 "traddr": "127.0.0.1", 00:46:23.461 "trsvcid": "4420", 00:46:23.461 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:23.461 "prchk_reftag": false, 00:46:23.461 "prchk_guard": false, 00:46:23.461 "ctrlr_loss_timeout_sec": 0, 00:46:23.461 "reconnect_delay_sec": 0, 00:46:23.461 "fast_io_fail_timeout_sec": 0, 00:46:23.461 "psk": "key0", 00:46:23.461 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:23.461 "hdgst": false, 00:46:23.461 "ddgst": false, 00:46:23.461 "multipath": "multipath" 00:46:23.461 } 00:46:23.461 }, 00:46:23.461 { 00:46:23.461 "method": "bdev_nvme_set_hotplug", 00:46:23.461 "params": { 00:46:23.461 "period_us": 100000, 00:46:23.461 "enable": false 00:46:23.461 } 00:46:23.461 }, 00:46:23.461 { 00:46:23.461 "method": "bdev_wait_for_examine" 00:46:23.461 } 00:46:23.461 ] 00:46:23.461 }, 00:46:23.461 { 00:46:23.461 "subsystem": "nbd", 00:46:23.461 "config": [] 00:46:23.461 } 00:46:23.461 ] 00:46:23.461 }' 00:46:23.461 14:07:20 keyring_file -- keyring/file.sh@115 -- # killprocess 85562 00:46:23.461 14:07:20 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85562 ']' 00:46:23.461 14:07:20 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85562 00:46:23.461 14:07:20 keyring_file -- common/autotest_common.sh@959 -- # uname 00:46:23.461 14:07:20 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:23.461 14:07:20 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85562 00:46:23.461 14:07:20 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:46:23.461 14:07:20 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:46:23.461 14:07:20 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85562' 00:46:23.461 killing process with pid 85562 00:46:23.461 14:07:20 keyring_file -- common/autotest_common.sh@973 -- # kill 85562 00:46:23.461 Received shutdown signal, test time was about 1.000000 seconds 00:46:23.461 00:46:23.461 Latency(us) 00:46:23.461 [2024-11-20T14:07:20.784Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:23.461 [2024-11-20T14:07:20.784Z] =================================================================================================================== 00:46:23.461 
[2024-11-20T14:07:20.784Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:46:23.461 14:07:20 keyring_file -- common/autotest_common.sh@978 -- # wait 85562 00:46:23.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:46:23.720 14:07:20 keyring_file -- keyring/file.sh@118 -- # bperfpid=85797 00:46:23.720 14:07:20 keyring_file -- keyring/file.sh@120 -- # waitforlisten 85797 /var/tmp/bperf.sock 00:46:23.720 14:07:20 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85797 ']' 00:46:23.720 14:07:20 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:46:23.720 14:07:20 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:23.720 14:07:20 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:46:23.720 14:07:20 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:23.720 14:07:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:23.720 14:07:20 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:46:23.720 14:07:20 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:46:23.720 "subsystems": [ 00:46:23.720 { 00:46:23.720 "subsystem": "keyring", 00:46:23.720 "config": [ 00:46:23.720 { 00:46:23.720 "method": "keyring_file_add_key", 00:46:23.720 "params": { 00:46:23.720 "name": "key0", 00:46:23.720 "path": "/tmp/tmp.kkzlYp6QaX" 00:46:23.720 } 00:46:23.720 }, 00:46:23.720 { 00:46:23.720 "method": "keyring_file_add_key", 00:46:23.720 "params": { 00:46:23.720 "name": "key1", 00:46:23.720 "path": "/tmp/tmp.uk5tjnTQlJ" 00:46:23.720 } 00:46:23.720 } 00:46:23.720 ] 00:46:23.720 }, 00:46:23.720 { 00:46:23.720 "subsystem": "iobuf", 00:46:23.720 "config": [ 00:46:23.720 { 00:46:23.720 "method": "iobuf_set_options", 00:46:23.720 "params": { 00:46:23.720 "small_pool_count": 8192, 00:46:23.720 "large_pool_count": 1024, 00:46:23.720 "small_bufsize": 8192, 00:46:23.720 "large_bufsize": 135168, 00:46:23.720 "enable_numa": false 00:46:23.720 } 00:46:23.720 } 00:46:23.720 ] 00:46:23.720 }, 00:46:23.720 { 00:46:23.720 "subsystem": "sock", 00:46:23.720 "config": [ 00:46:23.720 { 00:46:23.720 "method": "sock_set_default_impl", 00:46:23.720 "params": { 00:46:23.720 "impl_name": "uring" 00:46:23.720 } 00:46:23.720 }, 00:46:23.720 { 00:46:23.720 "method": "sock_impl_set_options", 00:46:23.720 "params": { 00:46:23.720 "impl_name": "ssl", 00:46:23.720 "recv_buf_size": 4096, 00:46:23.720 "send_buf_size": 4096, 00:46:23.720 "enable_recv_pipe": true, 00:46:23.720 "enable_quickack": false, 00:46:23.720 "enable_placement_id": 0, 00:46:23.720 "enable_zerocopy_send_server": true, 00:46:23.720 "enable_zerocopy_send_client": false, 00:46:23.720 "zerocopy_threshold": 0, 00:46:23.721 "tls_version": 0, 00:46:23.721 "enable_ktls": false 00:46:23.721 } 00:46:23.721 }, 00:46:23.721 { 00:46:23.721 "method": "sock_impl_set_options", 00:46:23.721 "params": { 00:46:23.721 "impl_name": "posix", 00:46:23.721 "recv_buf_size": 2097152, 00:46:23.721 "send_buf_size": 2097152, 00:46:23.721 "enable_recv_pipe": true, 00:46:23.721 "enable_quickack": false, 00:46:23.721 "enable_placement_id": 0, 00:46:23.721 "enable_zerocopy_send_server": true, 00:46:23.721 "enable_zerocopy_send_client": false, 00:46:23.721 "zerocopy_threshold": 0, 00:46:23.721 "tls_version": 0, 00:46:23.721 "enable_ktls": false 
00:46:23.721 } 00:46:23.721 }, 00:46:23.721 { 00:46:23.721 "method": "sock_impl_set_options", 00:46:23.721 "params": { 00:46:23.721 "impl_name": "uring", 00:46:23.721 "recv_buf_size": 2097152, 00:46:23.721 "send_buf_size": 2097152, 00:46:23.721 "enable_recv_pipe": true, 00:46:23.721 "enable_quickack": false, 00:46:23.721 "enable_placement_id": 0, 00:46:23.721 "enable_zerocopy_send_server": false, 00:46:23.721 "enable_zerocopy_send_client": false, 00:46:23.721 "zerocopy_threshold": 0, 00:46:23.721 "tls_version": 0, 00:46:23.721 "enable_ktls": false 00:46:23.721 } 00:46:23.721 } 00:46:23.721 ] 00:46:23.721 }, 00:46:23.721 { 00:46:23.721 "subsystem": "vmd", 00:46:23.721 "config": [] 00:46:23.721 }, 00:46:23.721 { 00:46:23.721 "subsystem": "accel", 00:46:23.721 "config": [ 00:46:23.721 { 00:46:23.721 "method": "accel_set_options", 00:46:23.721 "params": { 00:46:23.721 "small_cache_size": 128, 00:46:23.721 "large_cache_size": 16, 00:46:23.721 "task_count": 2048, 00:46:23.721 "sequence_count": 2048, 00:46:23.721 "buf_count": 2048 00:46:23.721 } 00:46:23.721 } 00:46:23.721 ] 00:46:23.721 }, 00:46:23.721 { 00:46:23.721 "subsystem": "bdev", 00:46:23.721 "config": [ 00:46:23.721 { 00:46:23.721 "method": "bdev_set_options", 00:46:23.721 "params": { 00:46:23.721 "bdev_io_pool_size": 65535, 00:46:23.721 "bdev_io_cache_size": 256, 00:46:23.721 "bdev_auto_examine": true, 00:46:23.721 "iobuf_small_cache_size": 128, 00:46:23.721 "iobuf_large_cache_size": 16 00:46:23.721 } 00:46:23.721 }, 00:46:23.721 { 00:46:23.721 "method": "bdev_raid_set_options", 00:46:23.721 "params": { 00:46:23.721 "process_window_size_kb": 1024, 00:46:23.721 "process_max_bandwidth_mb_sec": 0 00:46:23.721 } 00:46:23.721 }, 00:46:23.721 { 00:46:23.721 "method": "bdev_iscsi_set_options", 00:46:23.721 "params": { 00:46:23.721 "timeout_sec": 30 00:46:23.721 } 00:46:23.721 }, 00:46:23.721 { 00:46:23.721 "method": "bdev_nvme_set_options", 00:46:23.721 "params": { 00:46:23.721 "action_on_timeout": "none", 00:46:23.721 "timeout_us": 0, 00:46:23.721 "timeout_admin_us": 0, 00:46:23.721 "keep_alive_timeout_ms": 10000, 00:46:23.721 "arbitration_burst": 0, 00:46:23.721 "low_priority_weight": 0, 00:46:23.721 "medium_priority_weight": 0, 00:46:23.721 "high_priority_weight": 0, 00:46:23.721 "nvme_adminq_poll_period_us": 10000, 00:46:23.721 "nvme_ioq_poll_period_us": 0, 00:46:23.721 "io_queue_requests": 512, 00:46:23.721 "delay_cmd_submit": true, 00:46:23.721 "transport_retry_count": 4, 00:46:23.721 "bdev_retry_count": 3, 00:46:23.721 "transport_ack_timeout": 0, 00:46:23.721 "ctrlr_loss_timeout_sec": 0, 00:46:23.721 "reconnect_delay_sec": 0, 00:46:23.721 "fast_io_fail_timeout_sec": 0, 00:46:23.721 "disable_auto_failback": false, 00:46:23.721 "generate_uuids": false, 00:46:23.721 "transport_tos": 0, 00:46:23.721 "nvme_error_stat": false, 00:46:23.721 "rdma_srq_size": 0, 00:46:23.721 "io_path_stat": false, 00:46:23.721 "allow_accel_sequence": false, 00:46:23.721 "rdma_max_cq_size": 0, 00:46:23.721 "rdma_cm_event_timeout_ms": 0, 00:46:23.721 "dhchap_digests": [ 00:46:23.721 "sha256", 00:46:23.721 "sha384", 00:46:23.721 "sha512" 00:46:23.721 ], 00:46:23.721 "dhchap_dhgroups": [ 00:46:23.721 "null", 00:46:23.721 "ffdhe2048", 00:46:23.721 "ffdhe3072", 00:46:23.721 "ffdhe4096", 00:46:23.721 "ffdhe6144", 00:46:23.721 "ffdhe8192" 00:46:23.721 ] 00:46:23.721 } 00:46:23.721 }, 00:46:23.721 { 00:46:23.721 "method": "bdev_nvme_attach_controller", 00:46:23.721 "params": { 00:46:23.721 "name": "nvme0", 00:46:23.721 "trtype": "TCP", 00:46:23.721 "adrfam": "IPv4", 
00:46:23.721 "traddr": "127.0.0.1", 00:46:23.721 "trsvcid": "4420", 00:46:23.721 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:23.721 "prchk_reftag": false, 00:46:23.721 "prchk_guard": false, 00:46:23.721 "ctrlr_loss_timeout_sec": 0, 00:46:23.721 "reconnect_delay_sec": 0, 00:46:23.721 "fast_io_fail_timeout_sec": 0, 00:46:23.721 "psk": "key0", 00:46:23.721 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:23.721 "hdgst": false, 00:46:23.721 "ddgst": false, 00:46:23.721 "multipath": "multipath" 00:46:23.721 } 00:46:23.721 }, 00:46:23.721 { 00:46:23.721 "method": "bdev_nvme_set_hotplug", 00:46:23.721 "params": { 00:46:23.721 "period_us": 100000, 00:46:23.721 "enable": false 00:46:23.721 } 00:46:23.721 }, 00:46:23.721 { 00:46:23.721 "method": "bdev_wait_for_examine" 00:46:23.721 } 00:46:23.721 ] 00:46:23.721 }, 00:46:23.721 { 00:46:23.721 "subsystem": "nbd", 00:46:23.721 "config": [] 00:46:23.721 } 00:46:23.721 ] 00:46:23.721 }' 00:46:23.721 [2024-11-20 14:07:21.010986] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 00:46:23.721 [2024-11-20 14:07:21.011047] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85797 ] 00:46:23.979 [2024-11-20 14:07:21.155895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:23.979 [2024-11-20 14:07:21.215206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:46:24.238 [2024-11-20 14:07:21.368073] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:46:24.238 [2024-11-20 14:07:21.439204] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:24.804 14:07:21 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:24.804 14:07:21 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:46:24.804 14:07:21 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:46:24.804 14:07:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:24.804 14:07:21 keyring_file -- keyring/file.sh@121 -- # jq length 00:46:24.804 14:07:22 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:46:24.804 14:07:22 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:46:24.804 14:07:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:24.804 14:07:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:24.804 14:07:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:24.804 14:07:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:24.804 14:07:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:25.062 14:07:22 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:46:25.062 14:07:22 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:46:25.062 14:07:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:25.062 14:07:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:25.062 14:07:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:25.062 14:07:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:25.062 14:07:22 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:25.320 14:07:22 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:46:25.320 14:07:22 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:46:25.320 14:07:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:46:25.320 14:07:22 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:46:25.614 14:07:22 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:46:25.614 14:07:22 keyring_file -- keyring/file.sh@1 -- # cleanup 00:46:25.614 14:07:22 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.kkzlYp6QaX /tmp/tmp.uk5tjnTQlJ 00:46:25.614 14:07:22 keyring_file -- keyring/file.sh@20 -- # killprocess 85797 00:46:25.614 14:07:22 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85797 ']' 00:46:25.614 14:07:22 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85797 00:46:25.614 14:07:22 keyring_file -- common/autotest_common.sh@959 -- # uname 00:46:25.614 14:07:22 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:25.614 14:07:22 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85797 00:46:25.614 killing process with pid 85797 00:46:25.614 Received shutdown signal, test time was about 1.000000 seconds 00:46:25.614 00:46:25.614 Latency(us) 00:46:25.614 [2024-11-20T14:07:22.937Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:25.614 [2024-11-20T14:07:22.938Z] =================================================================================================================== 00:46:25.615 [2024-11-20T14:07:22.938Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:46:25.615 14:07:22 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:46:25.615 14:07:22 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:46:25.615 14:07:22 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85797' 00:46:25.615 14:07:22 keyring_file -- common/autotest_common.sh@973 -- # kill 85797 00:46:25.615 14:07:22 keyring_file -- common/autotest_common.sh@978 -- # wait 85797 00:46:25.874 14:07:23 keyring_file -- keyring/file.sh@21 -- # killprocess 85546 00:46:25.874 14:07:23 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85546 ']' 00:46:25.874 14:07:23 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85546 00:46:25.874 14:07:23 keyring_file -- common/autotest_common.sh@959 -- # uname 00:46:25.874 14:07:23 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:25.874 14:07:23 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85546 00:46:25.874 14:07:23 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:46:25.874 14:07:23 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:46:25.874 killing process with pid 85546 00:46:25.874 14:07:23 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85546' 00:46:25.874 14:07:23 keyring_file -- common/autotest_common.sh@973 -- # kill 85546 00:46:25.874 14:07:23 keyring_file -- common/autotest_common.sh@978 -- # wait 85546 00:46:26.443 ************************************ 00:46:26.443 END TEST keyring_file 00:46:26.443 ************************************ 00:46:26.443 00:46:26.443 real 0m14.151s 00:46:26.443 user 0m34.332s 00:46:26.443 
sys 0m3.245s 00:46:26.443 14:07:23 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:26.443 14:07:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:26.443 14:07:23 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:46:26.443 14:07:23 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:46:26.443 14:07:23 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:46:26.443 14:07:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:26.443 14:07:23 -- common/autotest_common.sh@10 -- # set +x 00:46:26.443 ************************************ 00:46:26.443 START TEST keyring_linux 00:46:26.443 ************************************ 00:46:26.443 14:07:23 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:46:26.443 Joined session keyring: 732311537 00:46:26.443 * Looking for test storage... 00:46:26.443 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:46:26.443 14:07:23 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:46:26.443 14:07:23 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:46:26.443 14:07:23 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:46:26.704 14:07:23 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:46:26.704 14:07:23 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:26.704 14:07:23 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:26.704 14:07:23 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:26.704 14:07:23 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:46:26.704 14:07:23 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:46:26.704 14:07:23 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:46:26.704 14:07:23 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:46:26.704 14:07:23 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:46:26.704 14:07:23 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:46:26.704 14:07:23 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:46:26.704 14:07:23 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:26.704 14:07:23 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:46:26.704 14:07:23 keyring_linux -- scripts/common.sh@345 -- # : 1 00:46:26.704 14:07:23 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:26.704 14:07:23 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:46:26.704 14:07:23 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:46:26.704 14:07:23 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:46:26.704 14:07:23 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:26.704 14:07:23 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:46:26.704 14:07:23 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:46:26.704 14:07:23 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:46:26.704 14:07:23 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:46:26.704 14:07:23 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:26.704 14:07:23 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:46:26.704 14:07:23 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:46:26.704 14:07:23 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:26.704 14:07:23 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:26.704 14:07:23 keyring_linux -- scripts/common.sh@368 -- # return 0 00:46:26.704 14:07:23 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:26.704 14:07:23 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:46:26.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:26.704 --rc genhtml_branch_coverage=1 00:46:26.704 --rc genhtml_function_coverage=1 00:46:26.704 --rc genhtml_legend=1 00:46:26.704 --rc geninfo_all_blocks=1 00:46:26.704 --rc geninfo_unexecuted_blocks=1 00:46:26.704 00:46:26.704 ' 00:46:26.704 14:07:23 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:46:26.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:26.704 --rc genhtml_branch_coverage=1 00:46:26.704 --rc genhtml_function_coverage=1 00:46:26.704 --rc genhtml_legend=1 00:46:26.704 --rc geninfo_all_blocks=1 00:46:26.704 --rc geninfo_unexecuted_blocks=1 00:46:26.704 00:46:26.704 ' 00:46:26.704 14:07:23 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:46:26.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:26.704 --rc genhtml_branch_coverage=1 00:46:26.704 --rc genhtml_function_coverage=1 00:46:26.704 --rc genhtml_legend=1 00:46:26.704 --rc geninfo_all_blocks=1 00:46:26.704 --rc geninfo_unexecuted_blocks=1 00:46:26.704 00:46:26.704 ' 00:46:26.704 14:07:23 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:46:26.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:26.704 --rc genhtml_branch_coverage=1 00:46:26.704 --rc genhtml_function_coverage=1 00:46:26.704 --rc genhtml_legend=1 00:46:26.704 --rc geninfo_all_blocks=1 00:46:26.704 --rc geninfo_unexecuted_blocks=1 00:46:26.704 00:46:26.704 ' 00:46:26.704 14:07:23 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:46:26.704 14:07:23 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:46:26.704 14:07:23 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:46:26.704 14:07:23 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:26.704 14:07:23 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:26.704 14:07:23 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:26.704 14:07:23 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:26.704 14:07:23 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:26.704 14:07:23 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:26.704 14:07:23 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:26.704 14:07:23 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:26.704 14:07:23 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:26.704 14:07:23 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:26.704 14:07:23 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:105ec898-1662-46bd-85be-b241e399edb9 00:46:26.704 14:07:23 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=105ec898-1662-46bd-85be-b241e399edb9 00:46:26.704 14:07:23 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:26.704 14:07:23 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:26.704 14:07:23 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:46:26.705 14:07:23 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:26.705 14:07:23 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:46:26.705 14:07:23 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:46:26.705 14:07:23 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:26.705 14:07:23 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:26.705 14:07:23 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:26.705 14:07:23 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:26.705 14:07:23 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:26.705 14:07:23 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:26.705 14:07:23 keyring_linux -- paths/export.sh@5 -- # export PATH 00:46:26.705 14:07:23 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:26.705 14:07:23 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:46:26.705 14:07:23 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:26.705 14:07:23 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:26.705 14:07:23 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:26.705 14:07:23 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:26.705 14:07:23 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:26.705 14:07:23 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:46:26.705 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:46:26.705 14:07:23 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:26.705 14:07:23 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:26.705 14:07:23 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:26.705 14:07:23 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:46:26.705 14:07:23 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:46:26.705 14:07:23 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:46:26.705 14:07:23 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:46:26.705 14:07:23 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:46:26.705 14:07:23 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:46:26.705 14:07:23 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:46:26.705 14:07:23 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:46:26.705 14:07:23 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:46:26.705 14:07:23 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:46:26.705 14:07:23 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:46:26.705 14:07:23 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:46:26.705 14:07:23 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:46:26.705 14:07:23 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:46:26.705 14:07:23 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:46:26.705 14:07:23 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:46:26.705 14:07:23 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:46:26.705 14:07:23 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:46:26.705 14:07:23 keyring_linux -- nvmf/common.sh@733 -- # python - 00:46:26.705 14:07:23 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:46:26.705 14:07:23 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:46:26.705 /tmp/:spdk-test:key0 00:46:26.705 14:07:23 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:46:26.705 14:07:23 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:46:26.705 14:07:23 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:46:26.705 14:07:23 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:46:26.705 14:07:23 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:46:26.705 14:07:23 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:46:26.705 14:07:23 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:46:26.705 14:07:23 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:46:26.705 14:07:23 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:46:26.705 14:07:23 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:46:26.705 14:07:23 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:46:26.705 14:07:23 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:46:26.705 14:07:23 keyring_linux -- nvmf/common.sh@733 -- # python - 00:46:26.705 14:07:24 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:46:26.705 14:07:24 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:46:26.705 /tmp/:spdk-test:key1 00:46:26.705 14:07:24 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85924 00:46:26.705 14:07:24 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:46:26.705 14:07:24 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85924 00:46:26.705 14:07:24 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 85924 ']' 00:46:26.705 14:07:24 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:26.705 14:07:24 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:26.705 14:07:24 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:26.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:26.705 14:07:24 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:26.705 14:07:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:46:26.965 [2024-11-20 14:07:24.062916] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
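At this point prep_key has produced both PSKs in NVMe/TCP interchange form (NVMeTLSkey-1:00:<base64 of key bytes plus CRC>:) and written them to /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 with mode 0600, the permission the keyring code insists on (the earlier keyring_file run shows the rejection when the file is 0660). A rough shell equivalent of that preparation, using the key0 value that appears verbatim later in the trace; the exact redirection inside the prep_key helper may differ:

    # interchange-format PSK for 00112233445566778899aabbccddeeff, digest 0
    psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
    printf '%s' "$psk" > /tmp/:spdk-test:key0   # file the test points SPDK at
    chmod 0600 /tmp/:spdk-test:key0             # anything looser is rejected
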
00:46:26.965 [2024-11-20 14:07:24.063117] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85924 ] 00:46:26.965 [2024-11-20 14:07:24.212191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:26.965 [2024-11-20 14:07:24.258942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:27.223 [2024-11-20 14:07:24.322678] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:46:27.790 14:07:24 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:27.790 14:07:24 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:46:27.790 14:07:24 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:46:27.790 14:07:24 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:27.790 14:07:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:46:27.790 [2024-11-20 14:07:24.967250] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:27.790 null0 00:46:27.790 [2024-11-20 14:07:24.999212] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:46:27.790 [2024-11-20 14:07:24.999354] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:46:27.790 14:07:25 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:27.790 14:07:25 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:46:27.790 114098902 00:46:27.790 14:07:25 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:46:27.790 162403051 00:46:27.790 14:07:25 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:46:27.790 14:07:25 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85942 00:46:27.790 14:07:25 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85942 /var/tmp/bperf.sock 00:46:27.790 14:07:25 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 85942 ']' 00:46:27.790 14:07:25 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:46:27.790 14:07:25 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:27.790 14:07:25 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:46:27.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:46:27.790 14:07:25 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:27.790 14:07:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:46:27.790 [2024-11-20 14:07:25.082902] Starting SPDK v25.01-pre git sha1 f9d18d578 / DPDK 24.03.0 initialization... 
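The session-keyring half of the test mirrors those file-based steps: the same interchange strings are added as "user" keys under the session keyring (@s), and keyctl prints the serials (114098902 and 162403051 above) that the cleanup code later unlinks. A sketch of the keyctl operations exactly as they appear in the trace:

    # load both PSKs into the session keyring; keyctl prints each key's serial
    keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s
    keyctl add user :spdk-test:key1 "NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs:" @s
    keyctl search @s user :spdk-test:key0   # resolve a key name back to its serial
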
00:46:27.790 [2024-11-20 14:07:25.083027] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85942 ] 00:46:28.049 [2024-11-20 14:07:25.210537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:28.049 [2024-11-20 14:07:25.268133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:46:28.985 14:07:25 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:28.985 14:07:25 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:46:28.985 14:07:25 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:46:28.985 14:07:25 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:46:28.985 14:07:26 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:46:28.985 14:07:26 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:46:29.248 [2024-11-20 14:07:26.505359] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:46:29.506 14:07:26 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:46:29.507 14:07:26 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:46:29.507 [2024-11-20 14:07:26.794581] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:29.766 nvme0n1 00:46:29.766 14:07:26 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:46:29.766 14:07:26 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:46:29.766 14:07:26 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:46:29.766 14:07:26 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:46:29.766 14:07:26 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:46:29.766 14:07:26 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:30.024 14:07:27 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:46:30.024 14:07:27 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:46:30.024 14:07:27 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:46:30.024 14:07:27 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:46:30.024 14:07:27 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:30.024 14:07:27 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:46:30.024 14:07:27 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:30.283 14:07:27 keyring_linux -- keyring/linux.sh@25 -- # sn=114098902 00:46:30.283 14:07:27 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:46:30.283 14:07:27 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
00:46:30.283 14:07:27 keyring_linux -- keyring/linux.sh@26 -- # [[ 114098902 == \1\1\4\0\9\8\9\0\2 ]] 00:46:30.283 14:07:27 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 114098902 00:46:30.283 14:07:27 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:46:30.283 14:07:27 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:46:30.283 Running I/O for 1 seconds... 00:46:31.220 20462.00 IOPS, 79.93 MiB/s 00:46:31.220 Latency(us) 00:46:31.220 [2024-11-20T14:07:28.543Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:31.220 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:46:31.220 nvme0n1 : 1.01 20454.02 79.90 0.00 0.00 6234.93 5036.83 12821.02 00:46:31.220 [2024-11-20T14:07:28.543Z] =================================================================================================================== 00:46:31.220 [2024-11-20T14:07:28.543Z] Total : 20454.02 79.90 0.00 0.00 6234.93 5036.83 12821.02 00:46:31.220 { 00:46:31.220 "results": [ 00:46:31.220 { 00:46:31.220 "job": "nvme0n1", 00:46:31.220 "core_mask": "0x2", 00:46:31.220 "workload": "randread", 00:46:31.220 "status": "finished", 00:46:31.220 "queue_depth": 128, 00:46:31.220 "io_size": 4096, 00:46:31.220 "runtime": 1.006648, 00:46:31.220 "iops": 20454.021663977874, 00:46:31.220 "mibps": 79.89852212491357, 00:46:31.220 "io_failed": 0, 00:46:31.220 "io_timeout": 0, 00:46:31.220 "avg_latency_us": 6234.926646292451, 00:46:31.220 "min_latency_us": 5036.827947598254, 00:46:31.220 "max_latency_us": 12821.016593886463 00:46:31.220 } 00:46:31.220 ], 00:46:31.220 "core_count": 1 00:46:31.220 } 00:46:31.220 14:07:28 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:46:31.220 14:07:28 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:46:31.480 14:07:28 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:46:31.480 14:07:28 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:46:31.480 14:07:28 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:46:31.480 14:07:28 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:46:31.480 14:07:28 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:46:31.480 14:07:28 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:31.750 14:07:28 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:46:31.750 14:07:28 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:46:31.750 14:07:28 keyring_linux -- keyring/linux.sh@23 -- # return 00:46:31.750 14:07:28 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:46:31.750 14:07:28 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:46:31.750 14:07:28 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:46:31.750 
14:07:28 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:46:31.750 14:07:28 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:31.750 14:07:28 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:46:31.750 14:07:28 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:31.750 14:07:28 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:46:31.750 14:07:28 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:46:32.021 [2024-11-20 14:07:29.173191] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:46:32.021 [2024-11-20 14:07:29.173299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x152f5d0 (107): Transport endpoint is not connected 00:46:32.021 [2024-11-20 14:07:29.174289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x152f5d0 (9): Bad file descriptor 00:46:32.021 [2024-11-20 14:07:29.175284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:46:32.021 [2024-11-20 14:07:29.175340] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:46:32.021 [2024-11-20 14:07:29.175391] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:46:32.021 [2024-11-20 14:07:29.175445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:46:32.021 request: 00:46:32.021 { 00:46:32.021 "name": "nvme0", 00:46:32.021 "trtype": "tcp", 00:46:32.021 "traddr": "127.0.0.1", 00:46:32.021 "adrfam": "ipv4", 00:46:32.021 "trsvcid": "4420", 00:46:32.021 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:32.021 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:32.021 "prchk_reftag": false, 00:46:32.021 "prchk_guard": false, 00:46:32.021 "hdgst": false, 00:46:32.021 "ddgst": false, 00:46:32.021 "psk": ":spdk-test:key1", 00:46:32.021 "allow_unrecognized_csi": false, 00:46:32.021 "method": "bdev_nvme_attach_controller", 00:46:32.021 "req_id": 1 00:46:32.021 } 00:46:32.021 Got JSON-RPC error response 00:46:32.021 response: 00:46:32.021 { 00:46:32.021 "code": -5, 00:46:32.021 "message": "Input/output error" 00:46:32.021 } 00:46:32.021 14:07:29 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:46:32.021 14:07:29 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:46:32.021 14:07:29 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:46:32.021 14:07:29 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:46:32.021 14:07:29 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:46:32.021 14:07:29 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:46:32.021 14:07:29 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:46:32.021 14:07:29 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:46:32.021 14:07:29 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:46:32.021 14:07:29 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:46:32.021 14:07:29 keyring_linux -- keyring/linux.sh@33 -- # sn=114098902 00:46:32.021 14:07:29 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 114098902 00:46:32.021 1 links removed 00:46:32.021 14:07:29 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:46:32.021 14:07:29 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:46:32.021 14:07:29 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:46:32.021 14:07:29 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:46:32.021 14:07:29 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:46:32.021 14:07:29 keyring_linux -- keyring/linux.sh@33 -- # sn=162403051 00:46:32.021 14:07:29 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 162403051 00:46:32.021 1 links removed 00:46:32.021 14:07:29 keyring_linux -- keyring/linux.sh@41 -- # killprocess 85942 00:46:32.021 14:07:29 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 85942 ']' 00:46:32.021 14:07:29 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 85942 00:46:32.021 14:07:29 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:46:32.021 14:07:29 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:32.021 14:07:29 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85942 00:46:32.021 14:07:29 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:46:32.022 14:07:29 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:46:32.022 14:07:29 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85942' 00:46:32.022 killing process with pid 85942 00:46:32.022 14:07:29 keyring_linux -- common/autotest_common.sh@973 -- # kill 85942 00:46:32.022 Received shutdown signal, test time was about 1.000000 seconds 00:46:32.022 00:46:32.022 Latency(us) 
00:46:32.022 [2024-11-20T14:07:29.345Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:32.022 [2024-11-20T14:07:29.345Z] =================================================================================================================== 00:46:32.022 [2024-11-20T14:07:29.345Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:46:32.022 14:07:29 keyring_linux -- common/autotest_common.sh@978 -- # wait 85942 00:46:32.301 14:07:29 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85924 00:46:32.301 14:07:29 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 85924 ']' 00:46:32.301 14:07:29 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 85924 00:46:32.301 14:07:29 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:46:32.301 14:07:29 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:32.301 14:07:29 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85924 00:46:32.301 killing process with pid 85924 00:46:32.301 14:07:29 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:46:32.301 14:07:29 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:46:32.301 14:07:29 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85924' 00:46:32.301 14:07:29 keyring_linux -- common/autotest_common.sh@973 -- # kill 85924 00:46:32.301 14:07:29 keyring_linux -- common/autotest_common.sh@978 -- # wait 85924 00:46:32.869 00:46:32.869 real 0m6.409s 00:46:32.869 user 0m11.890s 00:46:32.869 sys 0m1.677s 00:46:32.869 ************************************ 00:46:32.869 END TEST keyring_linux 00:46:32.869 ************************************ 00:46:32.869 14:07:30 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:32.869 14:07:30 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:46:32.869 14:07:30 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:46:32.869 14:07:30 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:46:32.869 14:07:30 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:46:32.869 14:07:30 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:46:32.869 14:07:30 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:46:32.869 14:07:30 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:46:32.869 14:07:30 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:46:32.869 14:07:30 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:46:32.869 14:07:30 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:46:32.869 14:07:30 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:46:32.869 14:07:30 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:46:32.869 14:07:30 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:46:32.869 14:07:30 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:46:32.869 14:07:30 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:46:32.869 14:07:30 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:46:32.869 14:07:30 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:46:32.869 14:07:30 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:46:32.869 14:07:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:46:32.869 14:07:30 -- common/autotest_common.sh@10 -- # set +x 00:46:32.869 14:07:30 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:46:32.869 14:07:30 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:46:32.869 14:07:30 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:46:32.869 14:07:30 -- common/autotest_common.sh@10 -- # set +x 00:46:35.410 INFO: APP EXITING 00:46:35.410 INFO: killing all VMs 
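Note on the keyring cleanup traced above: the test removes its two session-keyring entries by first resolving each key's serial number and then unlinking it. A minimal sketch of that two-step keyctl pattern is below; the key descriptions ":spdk-test:key0" and ":spdk-test:key1" are taken from the log, and any other naming would be a placeholder.

  # For each test key: look up its serial number in the session keyring (@s),
  # then unlink it, mirroring the unlink_key helper traced in the log.
  for name in ":spdk-test:key0" ":spdk-test:key1"; do
      sn=$(keyctl search @s user "$name") || continue   # prints the key serial, e.g. 114098902
      keyctl unlink "$sn"                               # reports "1 links removed" on success
  done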
00:46:35.410 INFO: killing vhost app 00:46:35.410 INFO: EXIT DONE 00:46:35.981 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:46:35.981 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:46:36.241 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:46:37.182 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:46:37.182 Cleaning 00:46:37.182 Removing: /var/run/dpdk/spdk0/config 00:46:37.182 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:46:37.182 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:46:37.182 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:46:37.182 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:46:37.182 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:46:37.182 Removing: /var/run/dpdk/spdk0/hugepage_info 00:46:37.182 Removing: /var/run/dpdk/spdk1/config 00:46:37.182 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:46:37.182 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:46:37.182 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:46:37.182 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:46:37.182 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:46:37.182 Removing: /var/run/dpdk/spdk1/hugepage_info 00:46:37.182 Removing: /var/run/dpdk/spdk2/config 00:46:37.182 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:46:37.182 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:46:37.182 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:46:37.182 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:46:37.182 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:46:37.182 Removing: /var/run/dpdk/spdk2/hugepage_info 00:46:37.182 Removing: /var/run/dpdk/spdk3/config 00:46:37.182 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:46:37.182 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:46:37.182 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:46:37.182 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:46:37.182 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:46:37.182 Removing: /var/run/dpdk/spdk3/hugepage_info 00:46:37.182 Removing: /var/run/dpdk/spdk4/config 00:46:37.182 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:46:37.182 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:46:37.182 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:46:37.182 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:46:37.182 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:46:37.182 Removing: /var/run/dpdk/spdk4/hugepage_info 00:46:37.182 Removing: /dev/shm/nvmf_trace.0 00:46:37.182 Removing: /dev/shm/spdk_tgt_trace.pid56937 00:46:37.182 Removing: /var/run/dpdk/spdk0 00:46:37.182 Removing: /var/run/dpdk/spdk1 00:46:37.182 Removing: /var/run/dpdk/spdk2 00:46:37.182 Removing: /var/run/dpdk/spdk3 00:46:37.182 Removing: /var/run/dpdk/spdk4 00:46:37.182 Removing: /var/run/dpdk/spdk_pid56784 00:46:37.182 Removing: /var/run/dpdk/spdk_pid56937 00:46:37.182 Removing: /var/run/dpdk/spdk_pid57138 00:46:37.182 Removing: /var/run/dpdk/spdk_pid57224 00:46:37.182 Removing: /var/run/dpdk/spdk_pid57246 00:46:37.182 Removing: /var/run/dpdk/spdk_pid57361 00:46:37.182 Removing: /var/run/dpdk/spdk_pid57379 00:46:37.182 Removing: /var/run/dpdk/spdk_pid57513 00:46:37.182 Removing: /var/run/dpdk/spdk_pid57707 00:46:37.182 Removing: /var/run/dpdk/spdk_pid57857 00:46:37.182 Removing: /var/run/dpdk/spdk_pid57935 00:46:37.182 
Removing: /var/run/dpdk/spdk_pid58019 00:46:37.182 Removing: /var/run/dpdk/spdk_pid58118 00:46:37.182 Removing: /var/run/dpdk/spdk_pid58190 00:46:37.182 Removing: /var/run/dpdk/spdk_pid58223 00:46:37.182 Removing: /var/run/dpdk/spdk_pid58259 00:46:37.182 Removing: /var/run/dpdk/spdk_pid58328 00:46:37.182 Removing: /var/run/dpdk/spdk_pid58444 00:46:37.182 Removing: /var/run/dpdk/spdk_pid58885 00:46:37.182 Removing: /var/run/dpdk/spdk_pid58937 00:46:37.182 Removing: /var/run/dpdk/spdk_pid58977 00:46:37.182 Removing: /var/run/dpdk/spdk_pid58991 00:46:37.182 Removing: /var/run/dpdk/spdk_pid59047 00:46:37.442 Removing: /var/run/dpdk/spdk_pid59063 00:46:37.442 Removing: /var/run/dpdk/spdk_pid59130 00:46:37.442 Removing: /var/run/dpdk/spdk_pid59146 00:46:37.442 Removing: /var/run/dpdk/spdk_pid59186 00:46:37.442 Removing: /var/run/dpdk/spdk_pid59204 00:46:37.442 Removing: /var/run/dpdk/spdk_pid59244 00:46:37.442 Removing: /var/run/dpdk/spdk_pid59262 00:46:37.442 Removing: /var/run/dpdk/spdk_pid59398 00:46:37.442 Removing: /var/run/dpdk/spdk_pid59428 00:46:37.442 Removing: /var/run/dpdk/spdk_pid59511 00:46:37.442 Removing: /var/run/dpdk/spdk_pid59845 00:46:37.442 Removing: /var/run/dpdk/spdk_pid59861 00:46:37.442 Removing: /var/run/dpdk/spdk_pid59893 00:46:37.442 Removing: /var/run/dpdk/spdk_pid59907 00:46:37.442 Removing: /var/run/dpdk/spdk_pid59922 00:46:37.442 Removing: /var/run/dpdk/spdk_pid59941 00:46:37.442 Removing: /var/run/dpdk/spdk_pid59955 00:46:37.442 Removing: /var/run/dpdk/spdk_pid59970 00:46:37.442 Removing: /var/run/dpdk/spdk_pid59989 00:46:37.442 Removing: /var/run/dpdk/spdk_pid60003 00:46:37.442 Removing: /var/run/dpdk/spdk_pid60024 00:46:37.442 Removing: /var/run/dpdk/spdk_pid60043 00:46:37.443 Removing: /var/run/dpdk/spdk_pid60051 00:46:37.443 Removing: /var/run/dpdk/spdk_pid60072 00:46:37.443 Removing: /var/run/dpdk/spdk_pid60091 00:46:37.443 Removing: /var/run/dpdk/spdk_pid60101 00:46:37.443 Removing: /var/run/dpdk/spdk_pid60122 00:46:37.443 Removing: /var/run/dpdk/spdk_pid60141 00:46:37.443 Removing: /var/run/dpdk/spdk_pid60153 00:46:37.443 Removing: /var/run/dpdk/spdk_pid60170 00:46:37.443 Removing: /var/run/dpdk/spdk_pid60206 00:46:37.443 Removing: /var/run/dpdk/spdk_pid60214 00:46:37.443 Removing: /var/run/dpdk/spdk_pid60249 00:46:37.443 Removing: /var/run/dpdk/spdk_pid60321 00:46:37.443 Removing: /var/run/dpdk/spdk_pid60344 00:46:37.443 Removing: /var/run/dpdk/spdk_pid60359 00:46:37.443 Removing: /var/run/dpdk/spdk_pid60382 00:46:37.443 Removing: /var/run/dpdk/spdk_pid60397 00:46:37.443 Removing: /var/run/dpdk/spdk_pid60400 00:46:37.443 Removing: /var/run/dpdk/spdk_pid60448 00:46:37.443 Removing: /var/run/dpdk/spdk_pid60456 00:46:37.443 Removing: /var/run/dpdk/spdk_pid60490 00:46:37.443 Removing: /var/run/dpdk/spdk_pid60494 00:46:37.443 Removing: /var/run/dpdk/spdk_pid60509 00:46:37.443 Removing: /var/run/dpdk/spdk_pid60513 00:46:37.443 Removing: /var/run/dpdk/spdk_pid60527 00:46:37.443 Removing: /var/run/dpdk/spdk_pid60534 00:46:37.443 Removing: /var/run/dpdk/spdk_pid60544 00:46:37.443 Removing: /var/run/dpdk/spdk_pid60553 00:46:37.443 Removing: /var/run/dpdk/spdk_pid60586 00:46:37.443 Removing: /var/run/dpdk/spdk_pid60608 00:46:37.443 Removing: /var/run/dpdk/spdk_pid60623 00:46:37.443 Removing: /var/run/dpdk/spdk_pid60646 00:46:37.443 Removing: /var/run/dpdk/spdk_pid60661 00:46:37.443 Removing: /var/run/dpdk/spdk_pid60663 00:46:37.443 Removing: /var/run/dpdk/spdk_pid60708 00:46:37.443 Removing: /var/run/dpdk/spdk_pid60715 00:46:37.443 Removing: 
/var/run/dpdk/spdk_pid60747 00:46:37.443 Removing: /var/run/dpdk/spdk_pid60749 00:46:37.703 Removing: /var/run/dpdk/spdk_pid60761 00:46:37.703 Removing: /var/run/dpdk/spdk_pid60764 00:46:37.703 Removing: /var/run/dpdk/spdk_pid60776 00:46:37.703 Removing: /var/run/dpdk/spdk_pid60779 00:46:37.703 Removing: /var/run/dpdk/spdk_pid60791 00:46:37.703 Removing: /var/run/dpdk/spdk_pid60794 00:46:37.703 Removing: /var/run/dpdk/spdk_pid60876 00:46:37.703 Removing: /var/run/dpdk/spdk_pid60929 00:46:37.703 Removing: /var/run/dpdk/spdk_pid61047 00:46:37.703 Removing: /var/run/dpdk/spdk_pid61084 00:46:37.703 Removing: /var/run/dpdk/spdk_pid61126 00:46:37.703 Removing: /var/run/dpdk/spdk_pid61146 00:46:37.703 Removing: /var/run/dpdk/spdk_pid61162 00:46:37.703 Removing: /var/run/dpdk/spdk_pid61177 00:46:37.703 Removing: /var/run/dpdk/spdk_pid61214 00:46:37.703 Removing: /var/run/dpdk/spdk_pid61235 00:46:37.703 Removing: /var/run/dpdk/spdk_pid61314 00:46:37.703 Removing: /var/run/dpdk/spdk_pid61332 00:46:37.703 Removing: /var/run/dpdk/spdk_pid61376 00:46:37.703 Removing: /var/run/dpdk/spdk_pid61457 00:46:37.703 Removing: /var/run/dpdk/spdk_pid61513 00:46:37.703 Removing: /var/run/dpdk/spdk_pid61544 00:46:37.703 Removing: /var/run/dpdk/spdk_pid61642 00:46:37.703 Removing: /var/run/dpdk/spdk_pid61690 00:46:37.703 Removing: /var/run/dpdk/spdk_pid61722 00:46:37.703 Removing: /var/run/dpdk/spdk_pid61954 00:46:37.703 Removing: /var/run/dpdk/spdk_pid62052 00:46:37.703 Removing: /var/run/dpdk/spdk_pid62080 00:46:37.703 Removing: /var/run/dpdk/spdk_pid62110 00:46:37.703 Removing: /var/run/dpdk/spdk_pid62143 00:46:37.703 Removing: /var/run/dpdk/spdk_pid62177 00:46:37.703 Removing: /var/run/dpdk/spdk_pid62218 00:46:37.703 Removing: /var/run/dpdk/spdk_pid62253 00:46:37.703 Removing: /var/run/dpdk/spdk_pid62648 00:46:37.703 Removing: /var/run/dpdk/spdk_pid62686 00:46:37.703 Removing: /var/run/dpdk/spdk_pid63036 00:46:37.703 Removing: /var/run/dpdk/spdk_pid63495 00:46:37.703 Removing: /var/run/dpdk/spdk_pid63765 00:46:37.703 Removing: /var/run/dpdk/spdk_pid64654 00:46:37.703 Removing: /var/run/dpdk/spdk_pid65584 00:46:37.703 Removing: /var/run/dpdk/spdk_pid65707 00:46:37.703 Removing: /var/run/dpdk/spdk_pid65769 00:46:37.703 Removing: /var/run/dpdk/spdk_pid67192 00:46:37.703 Removing: /var/run/dpdk/spdk_pid67508 00:46:37.703 Removing: /var/run/dpdk/spdk_pid70912 00:46:37.703 Removing: /var/run/dpdk/spdk_pid71272 00:46:37.703 Removing: /var/run/dpdk/spdk_pid71375 00:46:37.703 Removing: /var/run/dpdk/spdk_pid71515 00:46:37.703 Removing: /var/run/dpdk/spdk_pid71538 00:46:37.703 Removing: /var/run/dpdk/spdk_pid71572 00:46:37.703 Removing: /var/run/dpdk/spdk_pid71595 00:46:37.704 Removing: /var/run/dpdk/spdk_pid71695 00:46:37.704 Removing: /var/run/dpdk/spdk_pid71830 00:46:37.704 Removing: /var/run/dpdk/spdk_pid72006 00:46:37.704 Removing: /var/run/dpdk/spdk_pid72089 00:46:37.704 Removing: /var/run/dpdk/spdk_pid72277 00:46:37.704 Removing: /var/run/dpdk/spdk_pid72365 00:46:37.704 Removing: /var/run/dpdk/spdk_pid72453 00:46:37.704 Removing: /var/run/dpdk/spdk_pid72815 00:46:37.704 Removing: /var/run/dpdk/spdk_pid73235 00:46:37.704 Removing: /var/run/dpdk/spdk_pid73236 00:46:37.964 Removing: /var/run/dpdk/spdk_pid73237 00:46:37.964 Removing: /var/run/dpdk/spdk_pid73506 00:46:37.964 Removing: /var/run/dpdk/spdk_pid73771 00:46:37.964 Removing: /var/run/dpdk/spdk_pid74171 00:46:37.964 Removing: /var/run/dpdk/spdk_pid74178 00:46:37.964 Removing: /var/run/dpdk/spdk_pid74502 00:46:37.964 Removing: /var/run/dpdk/spdk_pid74516 
00:46:37.964 Removing: /var/run/dpdk/spdk_pid74541 00:46:37.964 Removing: /var/run/dpdk/spdk_pid74566 00:46:37.964 Removing: /var/run/dpdk/spdk_pid74571 00:46:37.964 Removing: /var/run/dpdk/spdk_pid74937 00:46:37.964 Removing: /var/run/dpdk/spdk_pid74990 00:46:37.964 Removing: /var/run/dpdk/spdk_pid75330 00:46:37.964 Removing: /var/run/dpdk/spdk_pid75524 00:46:37.964 Removing: /var/run/dpdk/spdk_pid75957 00:46:37.964 Removing: /var/run/dpdk/spdk_pid76513 00:46:37.964 Removing: /var/run/dpdk/spdk_pid77363 00:46:37.964 Removing: /var/run/dpdk/spdk_pid78007 00:46:37.964 Removing: /var/run/dpdk/spdk_pid78010 00:46:37.964 Removing: /var/run/dpdk/spdk_pid80054 00:46:37.964 Removing: /var/run/dpdk/spdk_pid80110 00:46:37.964 Removing: /var/run/dpdk/spdk_pid80170 00:46:37.964 Removing: /var/run/dpdk/spdk_pid80234 00:46:37.964 Removing: /var/run/dpdk/spdk_pid80352 00:46:37.964 Removing: /var/run/dpdk/spdk_pid80407 00:46:37.964 Removing: /var/run/dpdk/spdk_pid80467 00:46:37.964 Removing: /var/run/dpdk/spdk_pid80527 00:46:37.964 Removing: /var/run/dpdk/spdk_pid80919 00:46:37.964 Removing: /var/run/dpdk/spdk_pid82131 00:46:37.964 Removing: /var/run/dpdk/spdk_pid82277 00:46:37.964 Removing: /var/run/dpdk/spdk_pid82524 00:46:37.964 Removing: /var/run/dpdk/spdk_pid83132 00:46:37.964 Removing: /var/run/dpdk/spdk_pid83297 00:46:37.964 Removing: /var/run/dpdk/spdk_pid83455 00:46:37.964 Removing: /var/run/dpdk/spdk_pid83554 00:46:37.964 Removing: /var/run/dpdk/spdk_pid83716 00:46:37.964 Removing: /var/run/dpdk/spdk_pid83825 00:46:37.964 Removing: /var/run/dpdk/spdk_pid84549 00:46:37.964 Removing: /var/run/dpdk/spdk_pid84583 00:46:37.964 Removing: /var/run/dpdk/spdk_pid84618 00:46:37.964 Removing: /var/run/dpdk/spdk_pid84885 00:46:37.964 Removing: /var/run/dpdk/spdk_pid84920 00:46:37.964 Removing: /var/run/dpdk/spdk_pid84955 00:46:37.964 Removing: /var/run/dpdk/spdk_pid85546 00:46:37.964 Removing: /var/run/dpdk/spdk_pid85562 00:46:37.964 Removing: /var/run/dpdk/spdk_pid85797 00:46:37.964 Removing: /var/run/dpdk/spdk_pid85924 00:46:37.964 Removing: /var/run/dpdk/spdk_pid85942 00:46:37.964 Clean 00:46:38.223 14:07:35 -- common/autotest_common.sh@1453 -- # return 0 00:46:38.223 14:07:35 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:46:38.223 14:07:35 -- common/autotest_common.sh@732 -- # xtrace_disable 00:46:38.223 14:07:35 -- common/autotest_common.sh@10 -- # set +x 00:46:38.223 14:07:35 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:46:38.223 14:07:35 -- common/autotest_common.sh@732 -- # xtrace_disable 00:46:38.223 14:07:35 -- common/autotest_common.sh@10 -- # set +x 00:46:38.223 14:07:35 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:46:38.224 14:07:35 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:46:38.224 14:07:35 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:46:38.224 14:07:35 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:46:38.224 14:07:35 -- spdk/autotest.sh@398 -- # hostname 00:46:38.224 14:07:35 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:46:38.483 geninfo: WARNING: invalid characters removed from testname! 
00:47:05.084 14:07:59 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:47:05.342 14:08:02 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:47:07.877 14:08:04 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:47:09.816 14:08:06 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:47:11.718 14:08:08 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:47:14.254 14:08:11 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:47:16.193 14:08:13 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:47:16.193 14:08:13 -- spdk/autorun.sh@1 -- $ timing_finish 00:47:16.193 14:08:13 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:47:16.193 14:08:13 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:47:16.193 14:08:13 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:47:16.193 14:08:13 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:47:16.193 + [[ -n 5425 ]] 00:47:16.193 + sudo kill 5425 00:47:16.204 [Pipeline] } 00:47:16.220 [Pipeline] // timeout 00:47:16.226 [Pipeline] } 00:47:16.241 [Pipeline] // stage 00:47:16.247 [Pipeline] } 00:47:16.261 [Pipeline] // catchError 00:47:16.271 [Pipeline] stage 00:47:16.274 [Pipeline] { (Stop VM) 00:47:16.287 [Pipeline] sh 00:47:16.569 + vagrant halt 00:47:19.108 ==> default: Halting domain... 
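Note on the coverage steps above: they follow the usual lcov flow of capturing a test-time profile, merging it with the baseline, and then stripping paths that should not count toward coverage. A condensed sketch of that sequence is below; the repository path and the fedora39 test name are copied from the log, while the long run of --rc options shown there is omitted for brevity.

  # Capture counters for the test run, tagging the tracefile with the image name.
  lcov -q -c --no-external -d /home/vagrant/spdk_repo/spdk \
       -t fedora39-cloud-1721788873-2326 -o cov_test.info
  # Merge the baseline and test-run profiles into a single total.
  lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
  # Drop coverage attributed to bundled DPDK sources and system headers.
  lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info
  lcov -q -r cov_total.info '/usr/*' -o cov_total.info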
00:47:27.265 [Pipeline] sh 00:47:27.548 + vagrant destroy -f 00:47:30.088 ==> default: Removing domain... 00:47:30.359 [Pipeline] sh 00:47:30.642 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/output 00:47:30.653 [Pipeline] } 00:47:30.668 [Pipeline] // stage 00:47:30.673 [Pipeline] } 00:47:30.687 [Pipeline] // dir 00:47:30.693 [Pipeline] } 00:47:30.707 [Pipeline] // wrap 00:47:30.714 [Pipeline] } 00:47:30.727 [Pipeline] // catchError 00:47:30.737 [Pipeline] stage 00:47:30.740 [Pipeline] { (Epilogue) 00:47:30.755 [Pipeline] sh 00:47:31.039 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:47:36.339 [Pipeline] catchError 00:47:36.342 [Pipeline] { 00:47:36.359 [Pipeline] sh 00:47:36.648 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:47:36.648 Artifacts sizes are good 00:47:36.657 [Pipeline] } 00:47:36.674 [Pipeline] // catchError 00:47:36.686 [Pipeline] archiveArtifacts 00:47:36.694 Archiving artifacts 00:47:36.836 [Pipeline] cleanWs 00:47:36.847 [WS-CLEANUP] Deleting project workspace... 00:47:36.847 [WS-CLEANUP] Deferred wipeout is used... 00:47:36.854 [WS-CLEANUP] done 00:47:36.856 [Pipeline] } 00:47:36.872 [Pipeline] // stage 00:47:36.877 [Pipeline] } 00:47:36.891 [Pipeline] // node 00:47:36.896 [Pipeline] End of Pipeline 00:47:36.936 Finished: SUCCESS